dhj's comments

dhj | 8 years ago | on: Show HN: TOTP Based Port Knocking -- two factor firewall

> If you're getting lots of random IPs

That is the problem: it always seems to be random IPs. That's why fail2ban is a losing battle. Fail2ban works per IP, but no matter how sensitive the ban rule, there always seems to be an endless supply of new IPs.

I do use keys for ssh access, so disabling passwords does cover most of the safety concern. I guess it is more of an annoyance than anything. It looks huge in the logs, but in terms of network usage it probably boils down to one attempt every few minutes.

dhj | 8 years ago | on: Show HN: TOTP Based Port Knocking -- two factor firewall

Thank you. I will definitely try key-only access. That should reduce probing. The presence will still be visible, but if it completely deters probing that will be good enough. I looked at fail2ban, but it seemed like a losing battle against botnet scans.

Thank you for your feedback!

dhj | 8 years ago | on: Show HN: TOTP Based Port Knocking -- two factor firewall

Author here. Just wanted to get some feedback on this. It's a simple proof of concept, not meant for production. I found several similar techniques, but none exactly the same.

Short version:

Client and server share a key.

Each can generate a TOTP-based hash.

Client sends hash to server listening on UDP.

If the hashes match, the server opens the normally closed ssh port for a few seconds (long enough to make a connection).

Like a Google Authenticator TOTP code, the correct hash changes every T seconds, so identifying the UDP port and intercepting the hash is only helpful for a limited (if any) amount of time.
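The client side of the steps above can be sketched in a few lines. This is a minimal illustration, not the actual PoC code: the key, port number, hash algorithm (HMAC-SHA256 over a 30-second counter), and function names are all my own placeholders.

```python
import hashlib
import hmac
import socket
import struct
import time

SHARED_KEY = b"shared-secret"   # pre-shared between client and server (placeholder)
STEP = 30                       # TOTP time step, T seconds
KNOCK_PORT = 62201              # hypothetical UDP port the server listens on

def totp_hash(key, now=None, step=STEP):
    """HMAC of the current time counter -- the result changes every `step` seconds."""
    counter = int((time.time() if now is None else now) // step)
    return hmac.new(key, struct.pack(">Q", counter), hashlib.sha256).digest()

def knock(host):
    """Send the current hash; a server holding the same key opens ssh briefly."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(totp_hash(SHARED_KEY), (host, KNOCK_PORT))
    sock.close()
```

On the server side the received packet would be checked with `hmac.compare_digest` against the locally computed value (ideally for the current and previous counter, to tolerate clock skew) before the firewall rule is opened.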

Is this worth turning into a robust daemon? Is there a better way to deal with constant ssh probing? A module in a firewall would be ideal. Environment-based config would make it fairly easy to use in provisioning for ssh admin with a smaller scan footprint.

dhj | 9 years ago | on: Luxury Music Festival Turns Out to Be Half-Built Scene of Chaos

Birmingham, AL (US) must be really lucky with their festival scene. They have several significant festivals every year:

Sidewalk Film Festival

Sloss Fest

Brew Fest

Slice Fest

Secret Stages

I'm not sure if it is the local support or that they're just smaller. I suspect a lot of excellent event organizers with solid experience were looking for new jobs after City Stages wound down. City Stages was a 20+ year music festival that was generally successful logistically but wasn't profitable.

When you think about it, though, if there are 5+ festivals with 10-20k attendance every year in every metro area over 500k, there are bound to be some regular screwups.

And port-o-potties always suck.

dhj | 10 years ago | on: Secondary shops flooded with unicorn sellers

If the company is private, then employees exercising options generally (always?) are bound by clauses that restrict them from selling that stock, whether they are accredited investors or not. Not a lawyer, but I think there may be two reasons: 1) prevent a covert takeover from the original founders; 2) prevent a general market for private companies before IPO. An IPO involves a lot of regulatory overhead to confirm full disclosure. I think the restriction is to maintain control, but it may be a legal requirement before a public market.

In other words, you can't sell except as part of a board-approved sale of the company or a public offering. I think opportunities to buy are based on new issuance of stock (for accredited investors), not on trades of existing stock.

dhj | 10 years ago | on: U.S. Supreme Court Justice Antonin Scalia has died

> whether people retain their rights when they coordinate as a corporation.

You are incorrect. Citizens United was decided on the notion of corporate personhood -- the notion that corporations themselves have rights as if they were persons. There are very strict and repeatedly upheld limits on individual monetary contributions to campaigns.

However, CU broke that by giving people the ability to launder political money through a corporation.

Also, most non-profits (those 501(c)(3)s that want tax exemption) cannot do any sort of campaigning. Those that do are subject to taxes.

CU said specifically that corporations are people who can "say" (i.e., spend) whatever they want to get their message across. People can make individual donations to support this effort, essentially getting around existing campaign restrictions.

Money does not equal speech, and there was a good reason monetary donations were restricted. By removing the restrictions, they have reduced the ability of the average person to be heard, because that person now has to buy a bigger megaphone than the billionaires'.

You really do need to read up on corporate personhood and election law. Let me guess... FOX News fan?

dhj | 10 years ago | on: Chart Shows Who Marries CEOs, Doctors, Chefs and Janitors

With the adjustments in place to highlight same-sex couples and relatively unique combinations of professions, I'd love to see someone speak up with: HEY! That's me and my spouse! I wish they had the numbers on those. Actually, numbers on all of them would be interesting (raw, % of population, % of couples, etc).

dhj | 10 years ago | on: How Long Before Superintelligence? (1997)

I agree that some form of evolutionary algorithm will be our path to intelligent software (or a component of it). However, as genetic algorithms are currently implemented, I would say the following analogy holds: neural_net:brain::evolutionary_algorithm:evolution ...

In other words, GAs/EAs are a simplistic and minimal scratching of the surface compared to the complexity we see in nature. The problem is twofold: 1) we guide the evolution with specific artificial goals (get a high score, for instance); 2) the ideal "DNA" of a genetic algorithm is undefined.

In evolution we know post hoc that DNA is at least good enough (if not ideal) for the building blocks. However, we have had very little success identifying the DNA for genetic algorithms. If we make it commands or function sets, we end up with divergence (results get worse or stay the same per iteration rather than improving). The most successful GAs are those where the DNA components have been customized to a specific problem domain.
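A toy GA makes both limitations concrete. In this sketch (illustrative only; every name and parameter is my own) the "DNA" is just a bitstring and the goal is a hand-picked score, which is exactly the artificial-goal and undefined-DNA problem described above:

```python
import random

def evolve(fitness, dna_len=16, pop_size=50, generations=100, mut_rate=0.02):
    """Toy GA: bitstring DNA, truncation selection, one-point crossover.

    Both the encoding (bits) and the reward (`fitness`) are chosen by the
    programmer -- the two limitations discussed above."""
    pop = [[random.randint(0, 1) for _ in range(dna_len)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]              # keep the top half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, dna_len)
            child = a[:cut] + b[cut:]               # one-point crossover
            child = [bit ^ (random.random() < mut_rate) for bit in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Artificial goal: maximize the number of 1 bits ("get a high score")
best = evolve(fitness=sum)
```

This converges fine because the goal is trivial and the encoding matches it perfectly; swap in an open-ended problem with a generic command-set encoding and, as noted above, the population tends to stagnate or diverge instead.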

Regarding target goal selection, that is a major field of study in itself: reinforcement learning. What is the best way to define reward? In nature it is simple -- survival. In the computer it is artificial in some way: survival becomes an attribute or dynamic interaction selected by the programmer.

I believe multiple algorithmic techniques will come together in a final solution (GA, NN, SVM, MCMC, k-means, etc). So GA is still part of a large and difficult algorithmic challenge rather than a well-defined solution. And the algorithmic challenge definitely doesn't follow an exponential curve -- there are breakthroughs that could happen next year or in 100 years.

The bandwidth issue is the main reason I would put AGI at 2045-2065 (closer to 2065), but with the algorithmic issue I would put it past 2065 (in other words, far enough out that 50 years from now it could still be 50 years out). Regardless of the timeframe, it is a fascinating subject and I do think we will get there eventually. But I wouldn't put the algorithmic piece closer than 50 years out until we get a good dog, mouse, or even worm (C. elegans) level of intelligence programmed in software or robots.

dhj | 10 years ago | on: How Long Before Superintelligence? (1997)

Good question. In 2013 they hit 0.5 pflops (0.5*10^15 flops) by putting together 26,496 cores in one of their data centers. So I expect they have scaled proportionally and would be around 1-1.5 pflops. That would put them at #50-80 on top500.org. Bandwidth-wise they are probably at 10-50 gigabit/s, which is where 10G Ethernet sits and Infiniband FDR starts -- a lot of systems in that range use those technologies for communications (with custom and higher-bandwidth options in the top 10).
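The estimate is simple proportional scaling from the 2013 data point. A quick back-of-the-envelope check (the 2x-3x growth factor is my assumption, not a published figure):

```python
cores_2013 = 26_496
pflops_2013 = 0.5                     # EC2 cluster on the Nov 2013 Top500

# Per-core throughput of that 2013 run, for reference (~1.9e10 flops/core)
flops_per_core = pflops_2013 * 1e15 / cores_2013

# Assume the same setup has roughly doubled to tripled since then:
estimate_pflops = [pflops_2013 * k for k in (2, 3)]
print(estimate_pflops)                # [1.0, 1.5] -- the 1-1.5 pflops guess
```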

Current Top500: http://top500.org/list/2015/11/

Amazon 2013: http://arstechnica.com/information-technology/2013/11/amazon...

EDIT: As far as a whole data center is concerned, I'm not sure it would be a direct comparison, as bandwidth would not be as high between cabinets. Amazon using their off-the-shelf tech to make a supercomputer is probably a better indication of how they compare. Of course, at 26,496 cores that may be a data center!

dhj | 10 years ago | on: How Long Before Superintelligence? (1997)

Off by a few orders of magnitude. Tianhe-2 hit 33 pflops, or 3.3*10^16 flops, or approx 1/3 of the upper bound. Brain simulation is a snag, but it isn't our only snag.

Like you said, it's a general algorithm issue. We do not remotely understand the brain well enough to simulate it, and we have very little idea of what an intelligent algorithm (other than a brain sim) would look like.

Also, all of these estimates are based on flops, and none of them consider bandwidth. We are a few orders of magnitude lower in gigabits/s than we are in flops. I personally think that is where the bottleneck is. 100 billion neurons on a 100 gigabit/s pipe could each interact once per second, and then only at the level of a toggle switch. Granted, not all neurons have to interact with one another, but we are significantly behind in bandwidth and structural organization.
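The toggle-switch arithmetic works out like this (a rough sketch of the claim above, one bit per neuron as the bare minimum):

```python
neurons = 100e9            # ~10^11 neurons in a human brain
pipe_gbps = 100            # a 100 gigabit/s interconnect
bits_per_update = 1        # one bit per neuron: just a toggle, no real state

bits_per_full_pass = neurons * bits_per_update        # 10^11 bits
seconds_per_pass = bits_per_full_pass / (pipe_gbps * 1e9)
print(seconds_per_pass)    # 1.0 -- every neuron touched about once a second
```

Anything richer than a single bit per neuron, or any update rate faster than 1 Hz, pushes the required bandwidth up proportionally.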

Bandwidth is intimately tied to processing capacity. I don't think the bandwidth will be there until 2045-2065, and like you say, we have serious software/algorithm/understanding deficiencies to resolve before then. I would be very surprised if we get general AI before 2065, if ever. I do not expect it in my lifetime and would be pleasantly surprised if it happened.

dhj | 10 years ago | on: AI is transforming Google search – the rest of the web is next

I believe it, considering 2006-2008 was when all the deep learning pieces came together (some parts were decades old, some 5 years, some 2). Google's main push in ML is deep learning. Although, I would like to see the source too. Tried to find it using Google, but no luck! :)

dhj | 10 years ago | on: Graphene optical lens 200 nm thick breaks the diffraction limit

And just about all of them charge higher fees if you want it published under a Creative Commons (open) license. Note in GP's quote: the researcher can choose CC for a fee. This option has become available in nearly all peer-reviewed journals recently due to US grant requirements to open published results. They still have to go through rigorous peer review. Frankly, GP's mate doesn't know what he's talking about.