LionessLover's comments

LionessLover | 9 years ago | on: Why Great Entrepreneurs Are Older Than You Think (2014)

Do turn your adblocker on for that site. I had at some point turned it off to be nice to them - but what I got this time was too much for me: flashing everywhere, ads taking up a third of my browser window, and then an ad popup floating across the text that I had to close manually.

LionessLover | 9 years ago | on: M 122: Advanced Operating Systems (2015)

If you are interested in RTOS (real-time OS) courses, I recommend checking edX for the follow-up to this course, expected to arrive in September 2016:

edX course page: https://www.edx.org/course/embedded-systems-shape-world-utau...

More information: http://edx-org-utaustinx.s3.amazonaws.com/UT601x/index.html

The excellent quality of the above course - which includes programming actual hardware (you have to invest about $50 in components) - raises expectations for the upcoming one.

.

EDIT

The page is already up for the new course "Real-Time Bluetooth Networks - Shape the World":

https://www.edx.org/course/real-time-bluetooth-networks-shap...

> In this lab-based computer science course, explore the complexities of embedded systems and learn how to develop your own real-time operating system (RTOS) by building a personal fitness device with Bluetooth connectivity (BLE).

- Enhance your embedded system skills

- Write your own real-time operating system

- Design, develop and debug C code

- Implement a personal fitness device

- Communicate using Bluetooth

More info: http://edx-org-utaustinx.s3.amazonaws.com/UT601x/RTOS.html

LionessLover | 9 years ago | on: China issues demolition order on world’s largest religious town in Tibet

Yep. Human brains are made to see faces in clouds - and patterns everywhere. Because it's the only way for that little brain to function. In reality nothing is the same unless it's identical (not a copy - but the exact same thing, maybe seen from different angles or at different times). We like our clever analogies, and they serve a purpose, but even when making them it's best to be aware that they are a product of our brain and to always be ready to question whether an analogy actually serves the intended purpose. Even if you can use a specific analogy in one context, that doesn't mean it's useful in another one. I think it's okay to make such analogies - as long as everybody, including the person making them, is aware of the shortcomings, and of the fact that being able to make one is a very, very low threshold, given that it comes from brains that see animals and human faces in floating water vapor.

LionessLover | 9 years ago | on: Computer model matches humans at predicting how objects move

> What I'm talking about is extremely elementary

Whenever someone doesn't have an argument they resort to such empty phrases, merely repeating over and over "I am right!". Maybe you should have studied some neuroscience LIKE I DID, then you would not be left without arguments in discussions about neuroscience.

> plus the actions of muscles all take time!

And yet there is no "prediction". As has been pointed out to you by a lot of people including myself repeatedly. Cognitive dissonance is strong in "jamesrcole".

LionessLover | 9 years ago | on: Serverless Architectures

At the bottom of the article:

> This is an evolving publication, and I shall be extending it over the coming days and weeks to cover more topics on serverless architecture including some things commonly confused with serverless, and the benefits and drawbacks of this approach.

You can send a tweet to the author: https://twitter.com/mikebroberts

LionessLover | 9 years ago | on: ECMAScript 2016 Approved

Which is wrong. As is the use of "absolutely" - to show that you have the universe-opinion. What hubris. And it does not matter that some guy "himself" posted some opinion either. Right where you link to, there are different opinions from other people that show more thought was put into them. The "pg" comment is actually the only one supporting your argument; all the others are against it!

LionessLover | 9 years ago | on: ECMAScript 2016 Approved

> pg wrote the site

What does ownership have to do with it?

That does not make him more right than any other human being.

Ownership means one can impose one's will; it does not mean one is omniscient.

LionessLover | 9 years ago | on: Computer model matches humans at predicting how objects move

But the point is that it does not do that. A neural network does not work like a computer. It does not have to predict. It is a parallel flow from input to output AT ONCE. There is no "processing" like in a CPU, where it takes n CPU cycles and then the result is sent on. And as I said, it uses a proxy - it does not try to predict anything; it uses the data it has at that moment and nothing else. Before you get mad at me, do take some neuroscience courses, please. I'm an IT guy myself and it opened a completely new world for me. Arguing with someone who only sees one side is frustrating. And while I'm not good enough to explain the neuroscience - maybe not at all, definitely not in a forum comment - I still know a little bit about the subject. "Prediction" and "looking ahead" may be system outcomes, but they do not actually happen as part of the low-level process. Not for low-level processes like catching a flying object; I'm not talking about conscious thought processes.

A moving object produces input from different retinal ganglion cells - always in the form of action potential frequencies (so, an analog signal, despite each action potential being all-or-nothing, just as an aside). Timing differences in temporal summation - which can be a function of the speed the object is moving at in the real world - can lead to different downstream processing neurons being activated, eventually leading to different motor neurons being activated, or the same ones firing at different rates. So the computation takes place with the signal flowing as a "wave" across brain regions, but it all happens at once. There is no "let's calculate where this is going to be in a second". This is implicit in connecting input directly to output through paths that change in subtle ways depending on said input. Yes, the end result (the system outcome) is a "prediction", but not in the way a computer would do it.

It just "happens", there is no actual effort to predict anything. There also is no representation of such a "prediction" anywhere else: It flows right into your movement, but as somebody else has already pointed out just because you manage to catch the ball doesn't mean you are any good at consciously being able to make actual predictions.

By the way, the processing already starts in the retina, which consists of several layers of cells; its ganglion cells communicate with the visual cortex at the very back of the head (after being relayed through the lateral geniculate nucleus of the thalamus, in the middle of the head). They don't provide a signal like the one a camera CPU gets from an RGB chip, which simply sends pixel values 1:1. You have cells signaling movement from left to right, others from right to left, etc., coming straight from the retina.

I think the main point is that the entire process in a neural network is completely different from how a computer operates. When we name outcomes we may be tricked into thinking it's similar, but when we look at how the output is generated it is a completely different world. That does matter, it has implications for how we think about the whole thing, what we think we can achieve, and how.

If you did this in a computer, imagine not using any storage - not even CPU cache. All data must be processed at once; there are no buffers, not even on an "input pin". You have a stream of data and all you can do is decide where to move it next. It's a horrible analogy, but the best I can do right now. Oh, and you don't have a system clock signal - the data is the clock signal.

You don't do calculations the way a microchip performs them either; instead you rely on analog processing: the temporal and spatial distribution of the electrical signal matter. For example, if you send a lot of small signals: since each one is actually ions entering the cell (the dendrites of a neuron), it takes time to pump them out again, and if new signals keep arriving before the ion transporters manage that, the amount of ions builds up, possibly until reaching the threshold for firing an action potential. The same holds over space: along a dendrite there are many synapses, connected to different neurons (their axons), and the charges (ions) can equally build up over space, not just time. So the length of the wiring matters, as does the shape of the electrical signal - two things we don't want to have any influence in our microchips.

So computation in a chip and in a neural network is vastly different. Computation in the network happens "on the fly", simply by the movement of the signal through the network, encoded as the frequency of an all-or-nothing signal (the action potential) - but then every analog trick there is gets used to decide if and when an action potential fires in connected cells. Actually "storing" values happens over a longer period, by changing the connections: new synaptic connections form all the time, existing ones disappear, and existing synapses change their ion channel and ion transporter densities. That is far too slow to have an impact on any given computation, so it plays no role in trying to catch the ball that's in the air right now.
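To make the temporal summation part concrete, here is a toy leaky integrate-and-fire sketch - not a model of any real neuron, and all the numbers (charge per pulse, leak rate, threshold) are invented for illustration. It shows the one property described above: the same inputs do or don't produce a spike depending purely on their timing.

```javascript
// Toy leaky integrate-and-fire "neuron": each incoming pulse adds charge,
// and a leak (ion transporters pumping ions back out) removes charge over time.
// Pulses arriving close together summate and can cross threshold; the very
// same pulses spread out in time never do. All constants are made up.
function countSpikes(pulseTimesMs, { pulseCharge = 1.0, leakPerMs = 0.2, threshold = 3.0 } = {}) {
  let v = 0;        // accumulated membrane "charge"
  let spikes = 0;
  let tPrev = 0;
  for (const t of pulseTimesMs) {
    v = Math.max(0, v - leakPerMs * (t - tPrev)); // leak since the last pulse
    v += pulseCharge;                             // temporal summation
    if (v >= threshold) { spikes += 1; v = 0; }   // fire and reset
    tPrev = t;
  }
  return spikes;
}

// Five pulses 1 ms apart: charge builds faster than it leaks -> the cell fires.
const fast = countSpikes([0, 1, 2, 3, 4]);
// The same five pulses 10 ms apart: the leak wins -> no spike at all.
const slow = countSpikes([0, 10, 20, 30, 40]);
```

Note there is no clock and no stored "calculation" here: the timing of the input stream itself decides what the output is, which is the point being made above.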

LionessLover | 9 years ago | on: Serverless Architectures

Terms such as "serverless" don't have a fixed, sharp-edged meaning but are very fuzzy categories - except on a per-person basis, where you may (will) find people insisting on a specific meaning. When this story was submitted on some reddit forum (forgot which one), everybody was up in arms after just reading the headline, screaming "everything is a server" and "there is no cloud"! Don't take it too seriously. Human language is deliberately a very flexible tool, with the same word fitting into very different contexts and taking on very different roles. Even in science, by the way.

LionessLover | 9 years ago | on: ECMAScript 2016 Approved

Because the number system JS uses (IEEE 754 floating point) is an ISO standard, the problem is not unique to JS. It works for the majority of use cases, and implementing a second number system (while the old one would have to be kept forever) would increase cost and complexity significantly without showing enough benefit, since those who need something different can already build it themselves.

For example, if you do "money-math" you could just use only integers (cents instead of dollars) - your number will stay exact as long as it's a whole number and you remain below Number.MAX_SAFE_INTEGER (http://www.2ality.com/2013/10/safe-integers.html). That's not enough for big-finance math where fractions of cents matter, but for most such applications it is.
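A minimal sketch of the integer-cents approach - the two helper names are mine, not from any library:

```javascript
// Money math with plain JS numbers: keep every amount in integer cents.
// Integers are exact up to Number.MAX_SAFE_INTEGER (2^53 - 1), so ordinary
// sums of cent amounts never hit the classic 0.1 + 0.2 problem.
const toCents = dollars => Math.round(dollars * 100);
const toDollars = cents => (cents / 100).toFixed(2);

// The classic floating-point trap:
console.log(0.10 + 0.20);                    // 0.30000000000000004

// The same sum done in cents is exact:
const total = toCents(0.10) + toCents(0.20); // 30
console.log(toDollars(total));               // "0.30"

// Integer cents stay exact as long as the total fits in a safe integer:
console.log(Number.isSafeInteger(total));    // true
```

Converting to cents at the boundary and formatting only for display is the whole trick; nothing inside the arithmetic ever touches a fraction.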

LionessLover | 9 years ago | on: Computer model matches humans at predicting how objects move

Depends on what you mean by "predict". Unfortunately I don't even remember where I saw or heard this in a lecture, but when studying neuroscience I remember seeing exactly this question, with an example showing that you actually don't have to make predictions. I have also forgotten the explanation of how the (real, biological) neural network solves just such a problem without making a prediction - I only have the fuzziest memory that it was a process in which, at no point, was the path of the tracked object predicted. It just matched several sensory signal inputs and created outputs - something clever, using an indirect approach. "Predicting" would be observing the object for x amount of time, calculating where it will be some time later, using a model to come up with a way to intercept, then creating outputs, all of that in a loop - something like that. In any case, the way the neural network actually solved it was completely different from how an engineer would do it. In a sense, the neural network was "cheating" and doing far less work than you would expect.

The one thing I do remember for sure is that there was no "prediction" involved - none at all. Unless you argue backwards and, because it succeeded, declare the process a "prediction". Once explained, the whole process was actually quite primitive. Again, that was research on an actual biological neural network.

Darn, now I wish I had paid more attention. Any actual neuroscientists here? Without the details, even I can't see my own comment as a satisfactory reply, only as a step toward getting one from somewhere or someone else. But note that it depends on what you mean by "prediction" - as I said, if you define it backwards from success, then sure, prediction happened. My point is that the process is very different from how a human-made algorithm would do it.
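As a stand-in for the example I can't remember, here is a made-up toy simulation of the general idea: a "catcher" whose only rule is to step toward where the ball is right now. It has no model of gravity and never computes a landing point, yet it ends up at the landing spot anyway. All numbers (heights, speeds) are invented.

```javascript
// Toy interception without prediction: the catcher reacts only to the ball's
// CURRENT position at each timestep. No trajectory model, no landing-point
// calculation - and it still arrives where the ball comes down.
function simulateCatch() {
  const g = 9.8, dt = 0.01;
  const ball = { x: 0, y: 20, vx: 3, vy: 0 }; // thrown sideways off a 20 m ledge
  let catcher = 10;                           // starts beyond the landing point
  const catcherSpeed = 4;                     // a bit faster than the ball drifts
  while (ball.y > 0) {
    // The "world": simple projectile physics for the ball.
    ball.vy += g * dt;
    ball.y  -= ball.vy * dt;
    ball.x  += ball.vx * dt;
    // The "policy": step toward where the ball is RIGHT NOW, nothing else.
    const diff = ball.x - catcher;
    catcher += Math.sign(diff) * Math.min(catcherSpeed * dt, Math.abs(diff));
  }
  return { landingX: ball.x, catcherX: catcher };
}

const { landingX, catcherX } = simulateCatch();
```

Calling the end result a "prediction" is exactly the backwards-from-success definition: the rule itself only ever used the data available at that moment.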

LionessLover | 9 years ago | on: What Happened to All 53 of Marissa Mayer's Yahoo Acquisitions

> What blows my mind is how positive the HN crowd were about the acquisition of Tumblr at the time:

Self-selection bias. Those who post in a given topic are not always the same people.

What blows MY mind is how anyone can be surprised that within huge groups of people, sub-groups form (through self-selection, in this case) that hold very different views. Yes, that happens. HN isn't a person.

LionessLover | 9 years ago | on: What Google Learned from Its Quest to Build the Perfect Team

While reading that article, the page changed its font size by at least 150% five times (so far). No, I did not press any buttons. There is some detection JavaScript running that gets confused - maybe by my touch screen (it's a laptop)? Page zoom does not change and CTRL-0 does not reset the size, so it's not me.

As for the article... I'm amazed this is so popular (given the attention previous submissions here already got). Well, I guess it's nice to have a link to point to for all the things that do not matter.

This sentence scares me:

> Rozovsky and her colleagues had figured out which norms were most critical. Now they had to find a way to make communication and empathy — the building blocks of forging real connections — into an algorithm they could easily scale.

And this is a surprise:

> ‘‘By putting things like empathy and sensitivity into charts and data reports, it makes them easier to talk about,’’ Sakaguchi told me. ‘‘It’s easier to talk about our feelings when we can point to a number.’’

and

> And thanks to Project Aristotle, she now had a vocabulary for explaining to herself what she was feeling and why it was important. She had graphs and charts telling her that she shouldn’t just let it go.

Really? I'll have to crunch some numbers on how I feel about this.

.

.

PS: figured it out: when I just touch the screen, the font size changes. There are three steps. It takes a double-click with the mouse - but only a single touch with the finger on the touchscreen. I can't imagine this being useful anywhere - especially since with a touchscreen you can already do a two-finger zoom if you want to.

LionessLover | 9 years ago | on: “autocomplete=off is ignored on non-login input elements”

Quoting from the actual Google reply:

> We don't just ignore the autocomplete attribute, however. In the WHATWG standard, we defined a series of new autocomplete values that developers can use to better inform the browser about what a particular field is, and we encourage developers to use those types. [2]

> In cases where you really want to disable autofill, our suggestion at this point is to utilize the autocomplete attribute to give valid, semantic meaning to your fields. If we encounter an autocomplete attribute that we don't recognize, we won't try and fill it.

> As an example, if you have an address input field in your CRM tool that you don't want Chrome to Autofill, you can give it semantic meaning that makes sense relative to what you're asking for: e.g. autocomplete="new-user-street-address". If Chrome encounters that, it won't try and autofill the field.

If you read what they actually wrote, you would notice that - as usual - the headline does not even remotely capture the complexity of the problem.
