kochbeck's comments
kochbeck | 2 years ago | on: Watsonx: IBM's code assistant for turning COBOL into Java
In a way, the ease IS the problem: the runtime environment for COBOL (and other stuff on the mainframe) assumes that the underlying platform and OS deal with the really hard stuff like HA and concurrent data access and resource cost management. Which, on the mainframe, they do.
Now, contrast that with doing the same thing in, say, a Linux container on AWS. From the stock OS, can you request a write that guarantees lockstep execution across multiple cores and cross-checks the result? No. Can you request multisite replication of the action and verified synchronous on-processor execution (not just disk replication) at both sites such that your active-active multisite instance is always in sync? No. Can you assume that anything written will also stream to tape / cold storage for an indelible audit record? No. Can you request additional resources from the hypervisor that cost more money from the application layer and signal the operator for expense approval? No. (Did I intentionally choose features that DHT technology could replace one day? Yes, I did, and thanks for noticing.)
On the mainframe, these aren’t just OS built-ins. They’re hardware built-ins. Competent operators know how to both set them up and maintain them such that application developers and users never even have to ask for them (ideally). Good shops even have all the runtime instrumentation out there too—no need for things like New Relic or ServiceNow. Does it cost omg so much money? Absolutely. Omg you could hire an army for what it costs. But it’s there and has already been working for decades.
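To make the contrast concrete, here is roughly what just the multisite-replication item looks like when the application has to provide the guarantee itself on commodity Linux. This is a minimal Python sketch with made-up Replica stand-ins rather than anyone's real replication API; a production version is enormously harder, which is the point:

    import concurrent.futures

    class Replica:
        """Stand-in for a remote site; a real one would be a network call plus a durable, fsync'd write."""
        def __init__(self, name):
            self.name = name
            self.log = []

        def apply(self, record):
            self.log.append(record)  # pretend this is a synchronous, durable apply at the remote site
            return True              # acknowledgment

    def synchronous_write(replicas, record):
        """Return only after every site has acknowledged the record."""
        with concurrent.futures.ThreadPoolExecutor() as pool:
            acks = list(pool.map(lambda r: r.apply(record), replicas))
        if not all(acks):
            raise RuntimeError("not acknowledged at all sites; the caller owns the rollback")
        return acks

    if __name__ == "__main__":
        sites = [Replica("site-a"), Replica("site-b")]
        synchronous_write(sites, {"txn": 42, "amount": "100.00"})
        print([s.log for s in sites])

Every bit of failure handling, retry, and rollback logic this sketch waves away is exactly the stuff the mainframe stack does for you below the application.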
God knows it’s not a panacea—if I never open another session of the 3270 emulator, it’ll be too soon. And a little piece of me died inside every time I got dropped to the CICS command line. And don’t even get me started on the EBCDIC codepage.
Folks are like, “But wait, I can do all of that in a POSIX environment with these modern tools. And UTF-8 too dude. Stop crying.” Yup, you sure can. I’ve done it too. But when we’re talking about AI lifting and shifting code from the mainframe to a POSIX environment, the 10% it can’t do for you is… all of that. It can’t make fundamental architectural decisions for you. Because AI doesn’t (yet) have a way to say, “This is good and that is bad.” It has no qualitative reasoning, nor anticipatory scenario analysis, nor decision making framework based on an existing environment. It’s still a ways away from even being able to say, “If I choose this architecture, it’ll blow the project budget.” And that’s a relatively easy, computable guardrail.
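And that guardrail really is computable. Here's a toy sketch in Python, with invented unit costs that aren't anyone's real pricing, just to show how little machinery the check itself needs:

    # Illustrative unit costs only -- dollars per month, not real pricing.
    UNIT_COSTS = {
        "vm": 350,
        "managed_db": 2200,
        "cold_storage_tb": 25,
    }

    def projected_monthly_cost(architecture):
        """architecture: dict mapping component name -> count."""
        return sum(UNIT_COSTS[component] * count for component, count in architecture.items())

    def blows_budget(architecture, monthly_budget):
        return projected_monthly_cost(architecture) > monthly_budget

    if __name__ == "__main__":
        proposal = {"vm": 40, "managed_db": 3, "cold_storage_tb": 200}
        print(projected_monthly_cost(proposal))               # 25600
        print(blows_budget(proposal, monthly_budget=15000))   # True

The hard part isn't the arithmetic; it's getting the model of the existing environment right, and that's the part the AI can't do for you.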
If you want to see a great example of someone who built a whole-body architectural replacement for a big piece of the mainframe, check out Fiserv’s Finxact platform. In this case, they replaced the functionality (but not the language) of the MUMPS runtime environment rather than COBOL, but the theory is the same. It took them 3 companies to get it right. More than $100mm in investment. But now it has all the fire-and-forget features that banks expect on the mainframe. Throw it a transaction entry, and It Just Works(tm).
And Finxact screams on AWS which is the real miracle because, if you’ve only ever worked on general-purpose commodity hardware like x86-based Linux machines, you have no clue how much faster purpose-built transaction processors can be.
You know that GPGPU thing you kids have been doing lately? Imagine you’d been working on that since the 1960s and the competing technology had access to all the advances you had but had zero obligation to service workloads other than the ones it was meant for. That’s the mainframe. You’re trying to compete with multiple generations of very carefully tuned muscle memory PLUS every other tech advancement that wasn’t mainframe-specific PLUS it can present modern OSes as a slice of itself to make the whole thing more approachable (like zLinux) PLUS just in case you get close to beating it, it has the financial resources of half the banks, brokerages, transportation companies, militaries, and governments in the world to finance it. Oh, and there’s a company well over a century old with a moral compass about 1% more wholesome than the Devil whose entire existence rests on keeping a mortal lock on this segment of systems and which has ranked first or second in patents granted of any company in the world every year for decades.
It’s possible to beat but harder than people make it out to be. It makes so many of the really hard architectural problems “easy” (for certain definitions of the word easy that do not exclude “and after I spin up a new instance of my app, I want to drink poison on the front lawn of IBM HQ while blasting ‘This Will End in Tears’ because the operator console is telling me to buy more MIPS but my CIO is asking when we can migrate this 40-year-old pile of COBOL and HLASM to the cloud”).
Mainframes aren’t that hard. Nearly everyone who reads HN would be more than smart enough to master the environment, including the ancient languages and all the whackado OS norms like simulating punchcard outputs. But they’re also smart enough to not want to. THAT is the problem that makes elimination of the mainframe intractable. The world needs this level of built-in capability, but you have to be a bit nuts to want to touch the problem.
I have been to this hill. I can tell you I am not signing up to die on it, no matter how valuable it would be if we took the hill.
kochbeck | 3 years ago | on: Ask HN: Employers, why do you want us back in the office?
The top reason is, management wants workers back in the office because managers never learned how to manage people, so they practice management-by-walking-around, aka interrupt-driven behavior. Many companies have a culture of MBWA, and it’s a hard curse to break.
Another bad reason is, distanced work has led to a substantial reduction in workplace unfairness behaviors such as sexual harassment and race-based favoritism. And this, logically, has made female and minority employees more valuable and better performers. But in many workplaces, favoritism is the order of the day, and women and minorities were not the favorites. The favorites are now performing worse than the people they stepped on to be unfairly promoted, and it makes incompetent executives look, well… incompetent.
Another reason is that many people, particularly executives, have more authority, respect, or control in the workplace than they do at home. For quite a few people, their office has become their primary social outlet. And taking that away has proven unlivable for them.
The other reason that immediately came to mind is that executives are, by and large, older than the rank and file, and they (we) come from a time when building, maintaining, and overseeing an office space was both a critical part of the job and a source of pride / ego. For older management, offices are still a real-world manifestation of the success of the company that signals to other people how effective the leadership of the company is. People are less able to derive the same sense of awe from abstractions like sales numbers. If people don’t return to the office, it will not continue to make economic sense to have flashy offices, and this ego outlet will disappear.
Are these good reasons? They are not. But these reasons, honestly, ring truer to me than “hallway collisions.” In the real world, all motivating reasons are self-centered reasons, and executives simply don’t benefit from hallway and breakroom magic or mentoring of the young. They do perversely benefit from showy offices, discrimination, avoiding overt displays of their lack of skill, and forced social conduct, though.
kochbeck | 3 years ago | on: The branch banking model
Branches exist to handle and process A) cash demands, B) checks and other non-specie instruments, and C) paper for commercial clients. At a community or specialty bank, branches also exist to serve the particular, unusual needs of that community, usually business needs. Serving those needs often calls for unusual skills, such as assessing the quality of a crop or meeting with specialized experts.
That branches happen to also offer convenience to consumers is a happy accident, mostly, and it’s happier in that businesspeople are themselves consumers and often select their business bank based on where they personally bank. Branches are JUSTIFIED regulatorily by their public benefit which centers, in most cases, around consumer and SMB (which is to say, prosumer) access. But like many things, the regulatory rationale and the real purpose do not fully correspond. I’m sure you’re as shocked as I.
If branches were about sourcing consumer deposits, they would be uninsurable properties, because banks would burn their branches to the ground. Rest assured.
Source: I run banks.
kochbeck | 3 years ago | on: Capital One enters enterprise B2B software, new data management SaaS
CapOne may be too big to fail, but it’s not too big to receive a C
kochbeck | 3 years ago | on: Can the 64 and 128 survive? (1988) [pdf]
It felt like I had that C64 forever. I learned CBM BASIC, 6502 assembler, and even K
kochbeck | 3 years ago | on: Regina Rexx Interpreter
Thanks for the career, Rexx!
kochbeck | 4 years ago | on: What if Sun Microsystems acquired Apple in 1996?
I always believed the net effect of this would be that Microsoft would suddenly have to do an about face and support one of the OSS compiler chains, probably GCC. At the time, they had a current Mach-compiled version of Windows that was still being maintained, and odds are Windows—not MacOS—would have been the ascendant Mach-based OS, because MS would have lost a lot of its ability to fix its problems by losing control of its dev chain. They’d need more radical abstraction than the NT kernel was giving them at the time. Because it was still a branch from OS/2 1.2 which was… special and half-baked. (It’s important to remember that Linux was still considered a toy by most—the “serious” OSS OS was still BSD. And if you had real workloads you ran Solaris, even though you knew Sun was somehow going to doom themselves. The world then looks nothing like the world now.)
This really would have obviated the need for Apple to sell to Sun. Instead, MS would never have made the rescue investment, Sun would continue to skitter off the rails, and Apple would have sold to… I dunno, probably someone weird like Sony. Remember them? Because MS going to Mach would have poisoned the shift from Copland to NeXTStep… the world barely wanted one Mach-based OS, much less two. One neat side effect, though, is we would have probably seen something a lot like WSL back in 2000 or so. Because the Mach Win build took much more advantage of the OS “personalities” features than MacOS did.
Back then, all of this mattered a lot, because things were far less elegant than they are now. It’s hard to imagine how far we came in the intervening 25 years. So very far.
But in the middle of writing that brief, Judge Jackson shot his stupid mouth off, and I was like, “Welp, nobody’s getting split up now.” And I put it in my archive of good ideas that aren’t gonna happen.
So no, I don’t think there was ever a real scenario where Sun bought Apple.
kochbeck | 4 years ago | on: 10 Tiny PCs of the 1980s
There’s still a very active community around the 100, and there are a few old hardware guys who still make new expansions for it. Recently there’s even a CP/M board for it which means it can run a lot of apps like WordStar, making it very useful day-to-day. It’s a nice, distraction-free environment.
kochbeck | 4 years ago | on: Metacil(メタシル), a Pencil which can write for 16km without sharpening
Remember how standardized tests require you to use a “#2” pencil? That’s approximately an HB pencil on the standard graphite grading scale. It matters because a #2 lays down a specific amount of graphite; a harder pencil won’t, the marks come out too faint for the scoring machine to read reliably, and you’d fail the test.
Nowadays we do drafting on the computer, but back in the pencil-and-paper days, I remember getting trained on using, for instance, a very hard (7-9H) lead to predraw my figures and put in guide and horizon rays, and then go in later with, say, a 2B to darken up only the real lines. Then finish with drafting powder (eraser shavings, basically) to wipe out the light lines.
Needless to say, even with graphite, harder ones didn’t lay down what you’d call an acceptable line.
kochbeck | 10 years ago | on: IBM System/360 Model 67
The main issue was job completion predictability - most things we do with computers are fundamentally batch, and almost all the really, really important ones like bank account daily settlement and reconciliation are totally batch. There's simply nothing to be done while you wait for the process to complete nor anything of higher priority that you'd want to preempt that task. So the question is, if the task is business-critical important or if it's critical to major institutions such as the global economy - like, say, the Depository Trust Corporation's nightly cross-trader settlement process which is, in fact, still a mainframe batch - why would you want the process to be anything other than a deterministic length of time for a fixed input? You'd be willing to commit a whole piece of hardware to getting the job done, right? As it turns out, that's the reason. There are an awful lot of things that are more important than economical full-utilization of a machine, and most of those tasks are still carried out on mainframes, and usually they're still done in batch.
There are a bunch of secondary reasons as well, though: a 3270 terminal ran in the thousands of dollars a unit in the 1980s; the network was really, really slow, and sharing the terminal server was worse than slow; if you were lucky(?) enough to have a token ring desktop and CM/2 on your machine so you didn't need a 5250 death-ray CRT next to you, you were unlucky enough to be on token ring and good luck with that; at 9am when the world woke up and logged in, the entire SYSPLEX ground to a halt waiting for all the interactive logins to complete, even though folks would then idle most of the day... on and on and on, and all of those were issues with time-sharing systems that, for most applications, worked just as well if you punched a record card (I know, right? Punch cards...), put it in a stack, and handed it off to the data processing department at 5pm.
If I still had $X billion in transactions to clear a day where X > a number that would get me jail time if I screwed up, I would probably still do it on a zSeries mainframe running CICS and IMS but running almost totally in batch. Because why chance it?
kochbeck | 13 years ago | on: How to Choose Health Insurance – Startup Edition
The program is called Healthy SF. If you're presently living in SF and not covered, check the enrollment site and get registered right away.
kochbeck | 14 years ago | on: The Sandbox : banning LaTeX from the Mac App Store
For instance, can you still create a named pipe inside the sandbox that the parent process has the right to use? If so, write to it, and a consumer can take the output of that process and write it into an imposed directory structure as SQLite blobs in a data store that the parent also owns. You'd lose some key functionality unless you chunked the data a bit. And, you know, mmap() would be impossible to simulate, but nobody ever promised a reliable implementation of that.
It confuses me, because I've seen a handful of these worries expressed, and it seems like a bunch of traditional UNIX-y methods for dealing with these kinds of problems are still open. I don't know what all the rules are, because I haven't really seriously looked at it, but I can think of at least three old UNIX tricks off the top of my head that probably solve this concern acceptably for 80% of apps that have it - you could use named pipes; you could redirect output to a third process like netcat that connects back to a handler process via a domain socket and deals with it; the parent could open a shm segment and maintain a DMA-like sweeper that takes blocks off, puts them into files it owns, and keeps its own little virtual FS.
I know none of those is straight fopen(), but somebody would only have to write it one time. Seems like a classic my-current-filesystem-is-mounted-ro problem. That used to be a pretty everyday occurrence, and any old sys admin has 100 workarounds for it.
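For the curious, here's a minimal sketch of the named-pipe-into-blobs trick, with made-up paths and table names. The sandboxed child writes to the FIFO; the parent-owned consumer sweeps the stream into chunked SQLite blobs:

    import os
    import sqlite3

    FIFO = "/tmp/demo_sandbox_fifo"    # the pipe the sandboxed child writes into
    DB = "/tmp/demo_parent_store.db"   # the data store the parent owns
    CHUNK = 64 * 1024                  # chunk the stream so individual blobs stay a sane size

    def consume_once():
        if not os.path.exists(FIFO):
            os.mkfifo(FIFO)
        db = sqlite3.connect(DB)
        db.execute("CREATE TABLE IF NOT EXISTS output (id INTEGER PRIMARY KEY, data BLOB)")
        # Opening the FIFO for reading blocks until a writer shows up,
        # e.g. `echo hello > /tmp/demo_sandbox_fifo` from the child side.
        with open(FIFO, "rb") as pipe:
            while True:
                chunk = pipe.read(CHUNK)
                if not chunk:
                    break
                db.execute("INSERT INTO output (data) VALUES (?)", (chunk,))
        db.commit()
        db.close()

    if __name__ == "__main__":
        consume_once()

Whether the sandbox entitlements actually allow this particular arrangement is exactly the kind of rule I haven't checked, but the moving parts are all old, boring UNIX.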
And remember before you squawk about chunked byte streams being inserted into blobs: there's 15 years of Oracle Video Server delivering PPV porn to the hotelier masses that says it works fine.
kochbeck | 14 years ago | on: Lout: An alternative to LaTeX?
In college, I was typesetting my work in PlainTeX (I never did like LaTeX, but obviously I had it available) on a 14.77MHz 68000-based Amiga 2000, and the TeX distro came on floppies. I had a whopping 40Mb hard drive, and all the heavy lifting lived there - Metafont, dvips, tex itself. But they fit comfortably on 800K low-density floppies and ran from them, if you needed to. The other floppies were all fonts, and since the prevailing format for fonts in the rest of the world was Type 3 Postscript (yucky bitmap) and comparable TrueType, my work looked rockin.
So to review, I had it running largely off floppies on a machine a couple of orders of magnitude slower (and a couple of orders of magnitude less memory and storage) than my iPhone. And I frequently taught freshman English majors who wouldn't own their own computer for another 3 years how to use it, down to font rendering and selecting an output format for the target printer which was rarely Postscript back then.
Riddle me this then: why are current TeX distros completely indecipherable to me now? I mean, kpathsea was always a bit of a beast, but I understood it pretty much at a glance. How is it that, although I've used the platform on and off for two decades now, in the last 5 years I've had to call the Psychic Friends Network every time I tried to call a package that I thought I had installed correctly? Oh, and why is a whole install now larger than the sum total of all the storage I had at my disposal - every floppy, hard drive, mainframe quota, and gettemp limit - when I last used the system on a daily basis?
As far as I can tell, the last update to the core product was in 2008, and everything that's been added to the main engine since 1992 has been incremental support for things like modern font formats. So it should have grown linearly, not exponentially. But there it is. Big as life and twice as ugly.
This is actually the second question in five days I've seen in two different fora about, "How is TeX holding on?" And to look at the sample output that was produced by Lout, obviously the answer is, "Because no one ever came up with a replacement that produced better output." You don't have to ask Don Knuth to figure that one out. It's not that Lout hasn't surpassed TeX yet. It's that it hasn't beaten troff yet. The 70s called, and they're looking for their DEC LP01, man.
But I don't think people are actually voicing the question in their heads. I think the question they're actually asking is, "Who let this godawful piece of Frankencode run through the village terrorizing the children, and why won't someone please scrape it all into a pile and teach it how to sing Puttin' on the Ritz like it did 20 years ago?" Or, "If you got this thing back into shape, why wouldn't it be the rendering engine for ebooks, because if it's set up right, it can render a whole book from source live on an iPad which is 100x more powerful than its original compile target?" I can think of 20 questions like this. All the questions ultimately boil down to a wonderment that one of the best pieces of software ever written for making readable output is cared for so shoddily. It's like some laboratory experiment gone amuck on how layering bad abstractions on things makes even awesome things awful.
And now for my next trick, I'm going to go integrate XeTeX into my current product to generate custom typeset results for customers. No, seriously, I am. I see 20 more years of this platform in my future...
kochbeck | 14 years ago | on: SIM Cards Must Die
So if, say, Sprint issues a portability request to Syniverse (the mapping platform provider) so that they can have your number from Verizon, Syniverse puts that request in the next batch. Then the batch gets passed to VZ for evaluation for things like whether you still owe them money. If you're good to go, VZ kills your DN (that's your number and the associated SVC mapping), and your VZ service goes dead. Then they pass your record back to Syniverse who then passes the thumbs-up and the number to Sprint who sets up a new DN to your new service (and presumably your new handset if you're going from one locked-in CDMA net to another).
That's a really watered down, 4am version of what happens. But the upshot of all of it is that if it worked like WiFi SSID switching, every time you switched, you'd probably lose service for awhile. If all things work for the good, the switch can take like 10 minutes. I'm sure they could get it down to 1 or 2. But probably not 0.
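If it helps to see the shape of it, here's a purely illustrative toy model of that flow in Python. The carriers, checks, and steps are invented and bear no resemblance to Syniverse's actual interfaces; the structural point is that the old DN dies before the new one exists, which is where the dead time comes from:

    class Carrier:
        def __init__(self, name, delinquent=()):
            self.name = name
            self.dns = set()                   # directory numbers active on this network
            self.delinquent = set(delinquent)  # subscribers who still owe money

        def owes_money(self, number):
            return number in self.delinquent

        def kill_dn(self, number):
            self.dns.discard(number)           # the subscriber goes dark right here

        def create_dn(self, number):
            self.dns.add(number)               # ...and comes back here, minutes later

    def process_port_batch(batch, losing, gaining):
        """Clearinghouse-style batch: check with the losing carrier, then move each DN."""
        results = {}
        for number in batch:
            if losing.owes_money(number):
                results[number] = "rejected"
                continue
            losing.kill_dn(number)
            gaining.create_dn(number)
            results[number] = "completed"
        return results

    if __name__ == "__main__":
        vz, sprint = Carrier("VZ"), Carrier("Sprint")
        vz.dns.add("415-555-0100")
        print(process_port_batch(["415-555-0100"], losing=vz, gaining=sprint))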
Here's the punchline... the SIM card in GSM was specifically designed to OBVIATE the need for all that (also to act as an encryption key, but that got hacked years ago). The SIM is supposed to authenticate you to a particular DN and link you back to a billing record at your primary carrier. The theory was that every carrier would create roaming treaties, and you'd just wander from network to network, oblivious to whose actual network you were on. And your primary carrier would sort it out on the backend. And in many places, it actually pretty much works that way. You can carry 3 or 4 cards and swap carriers and numbers based on the plan you want to use. Because the phones aren't locked to a single carrier's cards.
A good example of this is that in the T-Mobile / AT&T breakup, they came to an agreement to allow cross-network roaming sometime late this year. So if you're a TMo subscriber, but you've got an AT&T signal, even in a TMo service area, you'll just ride AT&T instead.
So essentially the reason it doesn't already work this way is that A) CDMA is so popular in the US, and CDMA really requires the rigorous porting process, and B) the carriers who do support it (AT&T, TMo and Sprint on their now-dwindling GSM net) have been jerks about it for years. It's a business decision, not a technical one.
kochbeck | 14 years ago | on: I need advice on our company's CFO position?
I'm curious what you want this person to do. If you're prepping for an angel or seed round, is there something unusually complex about your business that requires more than a pro forma cash flow summary and a cap table? If not, it seems like you're overspending for something that you can probably convince someone to do for you for free.
Try looking for an advisor who bangs that sort of thing out or already has one set up for a business like yours. For companies I've advised, it wasn't uncommon for us to spend 3-4 hours putting all that together, one time. It's not brain surgery, and you'll be better off having learned how.
kochbeck | 14 years ago | on: Why I'm a Pirate
The guy who wrote that is receiving, at best, nominal returns from criminality along with the satisfaction of making the, "Fuck you, that's why," argument. Crime isn't paying well at all for him, because he's committing a potentially life-altering crime in increments of $0.99 in music. So let's just set all those people aside for a moment, because on an individual basis, that's just a wreck to explain. Would take interpretive dance. These sorts of people only matter economically in the aggregate (think: Bittorrent), but "people-in-the-aggregate" isn't in charge, doesn't steer anything. Real individual human beings are.
So how about some individual human beings who are benefitting mightily from piracy? Somebody must be making out big time. They must have a lot of power and a strong justification for having the system be just so.
And if you took a moment to ask, say, the former CTO of any political campaign, they'd tell you who those people are. But since you didn't ask, I'll just tell you: it's politicians. Heard it here first people: political campaigns PIRATE THEIR ASSES OFF. I know with 100% certainty that one of the sponsoring senators for PIPA won big riding on top of a sea of pirated software in their campaign office. You betcha. One of the sponsors.
In the last decade, when money into campaigns has increased by orders of magnitude, piracy has actually increased on campaigns, many of which can now afford to pay. Why? Laptops. Back when desktops were still king, odds were good that you'd have one or two legal copies of, say, Office that you were installing across all the machines in your phone banks, another couple copies for your volunteer centers, maybe one for your staff offices... all those places where fixed machines were. So at least you were installing at like 5:1. Not legal, but not crazy.
But that's not how it works anymore. Now everybody plays BYOL. Need Office? Sure, there's a copy on Bob's shared drive. Need MapInfo? That's on a fileserver. And everybody at a machine (and I mean everybody) needs basic commercial software to work. Some need even more - the Adobe Suite or Visio or MapInfo or... it just goes on and on. Copies of SPSS floating around. If it's a campaign for an incumbent, you need, at minimum, everything on your desktop in the campaign that the staff on the Hill have, because you're going to be passing lots of files. So incumbents' campaigns tend to get right into piracy real fast, because they need application parity with their official staff.
Multiply that by every staffer and every intern and every volunteer who brings in their laptop and that's a huge number of copies. A successful presidential campaign is probably pirating on the order of at least 3,000+ copies of just Office alone. Seriously. Go audit Romney. They're there.
Funny thing is, it was the artists(!) who ultimately leaned on the rights management firms and made campaigns stop pirating music. Possibly the one time ASCAP and BMI actually did anything for the artists, and it was against politicians. The deal was, artists were tired of politicians they didn't agree with giving uncleared public performances of their music. If they hated the guy, they sure didn't want him to also get the music for free. So the rights firms cracked down. Odds are, big campaigns now have a CD of cleared music, usually through BMI. They don't do it till they think they're likely to get caught, so they STILL PIRATE THE DAMN MUSIC. But eventually they make good. Want to check that one out? Call the compliance desk at the folding Cain campaign and ask if you can see their BMI clearances. Bet they don't have any - they bowed out too soon to get caught.
Oh, oh! Don't forget TV. A good rapid response operation is capturing all the news in areas in play and all the advertising for themselves and their opponents. Nowadays, there are firms that suck it down, and then they take the files and share them around the office. Much like Pirate Bay in the TV section. "Hey, did you see yesterday's AC360 on the other guy? Here's a copy!" Back when I was doing this crap, TiVo was still pretty much the best you could do on short notice, so I had a shelf of hacked TiVos. Ah, how life has gotten easier.
One more thing. Lists. Copyrighted lists. Mailing lists. Demo data. All the information detritus from campaigning. Stealing lists is a serious no-no. Reason being, the way politicians get rich (if they don't start rich, of course) is their list: because your campaign is not a shareholder-based corporation, the candidate ends up owning the assets. The key asset that gets created is the supporter list. A good list from a very successful national single run can bring in millions. Even for the loser.
So lists are precious. You'd think that somehow there would be an honor code around this, at least. "Thou shalt not screw thy coworkers out of their primary asset." That would, unfortunately, be untrue. Go ask any campaign data manager how they've "salted" their list. They'll tell you. They hide tripwire data in the list - emails that go to warning scripts or phone numbers that forward to their own cell. Because pirating each other's data in politics is also a national tradition.
A few things, some of them quite complex, are at the root of all this. A good example is campaign finance reform, where there are matching-funds spending caps and such. Piracy is a really good way to keep from moving spent money into, say, Iowa and incrementally laying waste to the cap before you've decided if you're going to get matched. It's a complex set of considerations and public perceptions. There are a lot of little dances that campaigns do, and piracy is a really good way to disappear major expenses in a very cash-constrained environment.
But a very senior Democratic political operative sat me down once when I was trying to convince him to buy legal licenses for an Iowa office. He said, "Dave, here's the deal: if we lose, there's nothing to go after. We'll leave the stage with negative money and nobody to pin it on. If we win, we are the Executive Office of the President, and we've got the Antitrust Division. Do you really think Microsoft, of all companies, is looking to pick that fight?"
tl;dr: Politicians operate vast organizations with questionable legal practices called campaigns. These campaigns get them elected to power and make them rich. Once elected, they legislate against the citizenry doing the things they did to gain power and wealth. This is not a conspiracy. Turns out they're just assholes.