mbellotti | 23 days ago | on: Ask HN: Have top AI research institutions just given up on the idea of safety?
mbellotti's comments
mbellotti | 5 years ago | on: Show HN: Modernizing Legacy Computer Systems
But I'm psyched. This was a two-year project: collecting stories, researching techniques and historical context, and then having Covid throw the production schedule into complete disarray.
mbellotti | 6 years ago | on: Ask HN: Was the Y2K crisis real?
mbellotti | 7 years ago | on: The Land Before Binary
mbellotti | 8 years ago | on: Ask HN: Do you know a less distracting Slack alternative?
mbellotti | 8 years ago | on: How a 20-year-old kernel feature helped USDS improve VA’s network
- You're too junior (not your case, I understand.)
- It is not clear that you've actually written software recently as a professional. (While we're looking for senior people and do have management needs, over time we've struggled with hiring managers vs. growing them, because government is such a radically different environment that success managing in the private sector is a poor indicator. So engineering managers are evaluated primarily as engineers first, and as managers only after they pass.)
- Your skills seem too specialized in areas where we do not have needs. (Much of government technology is super old, and much of what is wrong with it is technical debt and decay, not cutting-edge technical challenges. For that reason we prefer generalists. Again, this doesn't sound like your case.)
- Not enough web development work (There is an on-going debate about this, but realistically citizen facing services tend to mean websites and the infrastructure that supports them. In the past we have hired engineers who were unable to adjust to web development and we couldn't find them enough work to play to their strengths. So while we're open to software engineers from other disciplines, there's still a lot of inconsistency in how the engineers judging your resume weigh this issue. Our attempts to correct this are ongoing.)
- You've applied as an engineer and emphasized non-engineering accomplishments (Since we're a civic tech organization sometimes people curate their resumes to play up their social good activities instead of their engineering. This is without a doubt the wrong move. If our engineers don't think you can write code they will not clear you for a technical interview.)
- You've applied for the wrong role or it's not clear what role you would fit into (This seems like it might be your case. USDS has three types of roles [well five, but two are not really relevant here]: Engineering, Design [which includes visual, UX research, and content strategy], and Strategy/Operations [which includes both our front office administration and people coming in with significant government/policy/legal/product management experience]. While we definitely have people who straddle lines [PMs with engineering backgrounds, designers who can program, etc.], all those people still applied and were evaluated for one specific community.)
mbellotti | 9 years ago | on: What Happens When You Mix Java with a 1960 IBM Mainframe
"How are you going to get hacked if I talk about your mainframe? It's not connected to the public internet, is it?"
"No. Well... we don't know... but... hackers! Hackers are really smart, Marianne."
Part of the compromise was that I promised I would only use information that was already available publicly through government reports and news articles. I went back through my talk and documented where each fact was already published somewhere else until they were comfortable with it. So the ambiguity on whether the 7074 was the actual machine or an emulator was deliberate... there were certain things I could not find a public comment on and therefore agreed to avoid making direct statements about.
This all seems super annoying, but it makes sense when you realize how heavily scrutinized public servants are. In the end they were only trying to protect me, my organization, and Obama's legacy: three things that are really important to me. So I can't exactly blame them for it. I was happy to find a middle ground where they felt comfortable, the organizers weren't too badly inconvenienced, and I got to give the talk I wanted to.
mbellotti | 9 years ago | on: What Happens When You Mix Java with a 1960 IBM Mainframe
Indeed. The point of the talk was that 1) legacy is often assumed to be bad not for any real technical reasons but just because it is legacy and 2) a lot of what was being presented as legacy wasn't even legacy. Their OS 2200 version was actually newer than the Oracle DB they were using on the "modern" side of the stack.
mbellotti | 9 years ago | on: Staying with the US Digital Service
mbellotti | 12 years ago | on: Maryland Wins the Fall 2013 Hackathon Season
mbellotti | 13 years ago | on: Tailgating YC
For the record, I've always really respected YC for being honest and up front about biases. I find it very refreshing, because most everyone else is too scared of being accused of discrimination to admit it. Everyone has biases. Every VC and angel is going to try to match what's in front of them to something they know when assessing potential value. I have no doubt that YC will accept a great company that doesn't fit the "type"; we spoke to a lot of companies like that before we came out to SF.
What I'm curious about is YC's habit of "fund without idea" or "fund for new idea" ... how likely is it that someone who doesn't fit the traditional startup type would be given that opportunity? I don't know the answer because there's no data on what ideas people came into YC with vs. what they left with ... but it's an interesting question to me, because cognitive biases are an interesting topic to me.
The people who think they are smart enough to think their way out of their own biases are usually the ones who fall victim to them the most often. So I've always really seen YC's honesty about this as a GOOD sign, not fodder for criticism.
mbellotti | 13 years ago | on: Tailgating YC
Except the odds of you winning anything over pocket change are astronomically low, so most people gamble away not just all their credits, but all their winnings and some of their own money to boot. That's why the casinos are so eager to give you free credits for everything.
What I was doing was playing the penny slots on the lowest bet level and cashing out anything over ten cents. I was making about 60 to 70 cents on the dollar ... which sucks if it's your dollar, but the whole point was it WASN'T. I was using the slots to convert the casino's free credits to cash.
... Not what I would have done if given free choice of activity, but like I said, my friends wouldn't let me play blackjack :D If I have to sit for hours while the bride-to-be brags about her bullshit "system" for hitting jackpots, I might as well make a little money.
mbellotti | 13 years ago | on: Why Marissa Mayer should acquire IFTTT and go all in on Yahoo Pipes
mbellotti | 13 years ago | on: Your Company Needs A Chief Dissent Officer
mbellotti | 13 years ago | on: Twilio's first support hire joins engineering
mbellotti | 13 years ago | on: Ask HN: Sending first e-mail to potential users from "waiting list"
But more to the point, is it worth scraping fb on the off chance that their email is part of the public profile? I would think most of your target audience would have better privacy settings.
mbellotti | 13 years ago | on: Programming Interview Questions Stack Exchange Proposal
mbellotti | 13 years ago | on: Even If You're All-Powerful, It's Hard To Fix The Economy
When I was working in defense technology I had two questions for engineers when we talked about Safety:
1) Can the operator assess the risk of using this technology?
2) If something goes wrong during operations, can the operator mitigate the risk?
The degree to which the answer to each of those questions is yes is a measure of how safe that technology is. Technology that is simple to understand, executes deterministically every single time, makes its malfunctions obvious, and gives the operator enough time to either correct it or stop it is generally perceived as safe. Technology that hides what it is really doing, confuses the operator about what the effects of operating it might be, and either executes faster than the operator can respond or specifically prevents the operator from responding is more likely to trigger negative safety outcomes.
The problem the AI industry faces is that tricking the operator into thinking the technology is doing something it is not is explicitly part of their business model. Read any of the mentioned authors (Dekker is probably the best starting point) and it will become obvious why AI Safety is impossible when AI is dependent on pretending to "think" and "reason". In order to be safe they would have to abandon that. If they abandon that, they will be unable to raise the capital they need to keep the bubble from bursting. The technology will survive, maybe with another AI winter, but many of the businesses will not.
So they will abandon the lip service about Safety instead, but then that was never real Safety to begin with. Real Safety is not about zero risk. It is just as impossible to have zero risk as it is to have 100% uptime. Real Safety is about how the technology is designed to manage risk as part of an overall system.