Several years before it became fashionable to dismiss everything as a Markov chain.
Quantum mechanics can be described as a Markov chain? That seems plausible, but I haven't worked with MCs enough to see exactly how. Could you please elaborate? It seems interesting.
Since people started using marketing tactics to promote themselves. WFC is a $100 name for a $1 concept. Other entries in the tech hall of shame are the Mersenne Twister and dependency injection.
ben_w|1 year ago
Given that a simple history can be mapped into a higher-dimensional state, Markov chains are much more common than they first seem, so it's basically* always possible to dismiss any physically implementable system as "a Markov chain" if you're so inclined.
* While I wouldn't be surprised if someone has come up with laws of physics that can't be described by a Markov chain, mere quantum mechanics can.
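To make the history-into-state trick concrete, here is a minimal sketch (the two-symbol process and its transition table are invented for illustration): a process whose next symbol depends on the last two symbols is not Markov over single symbols, but it becomes an ordinary order-1 Markov chain once the state is enlarged to the pair (previous, current).

```python
# Made-up order-2 process: the next symbol is a function of the last TWO.
order2 = {
    ("a", "a"): "b",
    ("a", "b"): "a",
    ("b", "a"): "a",
    ("b", "b"): "a",
}

def step(state):
    """One transition over the augmented state (prev, curr).

    The next state depends only on `state`, so this IS a Markov chain,
    even though the underlying symbol process needed two steps of history.
    """
    prev, curr = state
    return (curr, order2[(prev, curr)])

state = ("a", "b")
seq = list(state)
for _ in range(6):
    state = step(state)
    seq.append(state[1])
print("".join(seq))  # → abaabaab
```

The same augmentation works for any finite history length, which is why "it's a Markov chain" is so hard to falsify.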
jampekka|1 year ago
wasabi991011|1 year ago
linkdd|1 year ago
Still a misnomer in my opinion, but I noticed that this part of the algorithm was missing from all the articles that followed (mine included). People are basically implementing sudoku solvers :)
on_the_train|1 year ago
DonHopkins|1 year ago
https://en.wikipedia.org/wiki/Extreme_learning_machine#Contr...
>Controversy
>There are two main complaints from academic community concerning this work, the first one is about "reinventing and ignoring previous ideas", the second one is about "improper naming and popularizing", as shown in some debates in 2008 and 2015.[33] In particular, it was pointed out in a letter[34] to the editor of IEEE Transactions on Neural Networks that the idea of using a hidden layer connected to the inputs by random untrained weights was already suggested in the original papers on RBF networks in the late 1980s; Guang-Bin Huang replied by pointing out subtle differences.[35] In a 2015 paper,[1] Huang responded to complaints about his invention of the name ELM for already-existing methods, complaining of "very negative and unhelpful comments on ELM in neither academic nor professional manner due to various reasons and intentions" and an "irresponsible anonymous attack which intends to destroy harmony research environment", arguing that his work "provides a unifying learning platform" for various types of neural nets,[1] including hierarchical structured ELM.[28] In 2015, Huang also gave a formal rebuttal to what he considered as "malign and attack."[36] Recent research replaces the random weights with constrained random weights.[6][37]
But at least it's easier to say, rolls off the tongue smoothly, and makes better clickbait for awesome blog postings!
I also love how the cool buzzwords "Reservoir Computing" and "Liquid State Machines" sound like such deep stuff.
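For what it's worth, the idea the quoted controversy is about fits in a dozen lines. A hedged sketch (toy regression data, sizes picked arbitrarily): a single hidden layer with random, never-trained weights, followed by a linear readout fit in closed form by least squares.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: learn y = sin(x) on [0, pi]. Data and sizes are made up.
X = np.linspace(0, np.pi, 200).reshape(-1, 1)
y = np.sin(X).ravel()

n_hidden = 50
W = rng.normal(size=(1, n_hidden))  # random input->hidden weights (never trained)
b = rng.normal(size=n_hidden)       # random hidden biases (never trained)

H = np.tanh(X @ W + b)              # fixed random features

# Only the output weights are "learned", via a closed-form least-squares fit.
beta, *_ = np.linalg.lstsq(H, y, rcond=None)

y_hat = H @ beta
print("max abs error:", np.abs(y_hat - y).max())
```

Whether you call this an ELM, a random-feature RBF network, or reservoir computing with a trivial reservoir is exactly the naming dispute above.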
https://news.ycombinator.com/item?id=40903302
>"I'll tell you why it's not a scam, in my opinion: Tide goes in, tide goes out, never a miscommunication." -Bill O'Reilly
How about rebranding WFC as "Extreme Liquid Quantum Sudoku Machines"? ;)
Then there's "Crab Computing"!
https://news.ycombinator.com/item?id=42701560
[...] If billiard balls aren't creepy enough for you, live soldier crabs of the species Mictyris guinotae can be used in place of the billiard balls.
https://www.newscientist.com/blogs/onepercent/2012/04/resear...
https://www.wired.com/2012/04/soldier-crabs/
http://www.complex-systems.com/abstracts/v20_i02_a02.html
Robust Soldier Crab Ball Gate
Yukio-Pegio Gunji, Yuta Nishiyama. Department of Earth and Planetary Sciences, Kobe University, Kobe 657-8501, Japan.
Andrew Adamatzky. Unconventional Computing Centre. University of the West of England, Bristol, United Kingdom.
Abstract
Soldier crabs Mictyris guinotae exhibit pronounced swarming behavior. Swarms of the crabs are tolerant of perturbations. In computer models and laboratory experiments we demonstrate that swarms of soldier crabs can implement logical gates when placed in a geometrically constrained environment.
https://www.futilitycloset.com/2017/02/26/crab-computing/
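The crab work builds on Fredkin and Toffoli's billiard-ball "interaction gate", which reduces to a small truth table once the geometry is abstracted away. A sketch (the channel names are mine; real swarms are much messier, and the paper reports the collision behaviour only approximates this ideal):

```python
def interaction_gate(a: bool, b: bool) -> dict:
    """Idealised billiard-ball interaction gate.

    Two balls (or crab swarms) enter a junction; they collide only if
    both are present, and the collision deflects them onto new paths.
    Presence on each output path then encodes a Boolean function of
    the inputs. Channel names are invented for illustration.
    """
    return {
        "deflected": a and b,          # collision path: logical AND
        "a_undisturbed": a and not b,  # A passes through untouched
        "b_undisturbed": b and not a,  # B passes through untouched
    }

for a in (False, True):
    for b in (False, True):
        print(a, b, interaction_gate(a, b))
```

Because every input ball leaves on some output path, the gate is conservative (nothing is destroyed), which is what makes it attractive for physical and, apparently, crustacean implementations.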
rererereferred|1 year ago
YeGoblynQueenne|1 year ago
https://escholarship.org/uc/item/1fb9k44q
It splits an image into cells using convolutions, derives a set of constraints on how cells can be combined, and then generates combinations that satisfy the constraints. It's a form of machine learning based on combinatorial optimisation, really.
As far as I can tell it doesn't apply any Markov assumptions anywhere, but I might just not have noticed it, so please prove me wrong on that one.
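A toy 1-D version of that constraint-based reading (the sample string is made up, and the real algorithm works on 2-D tiles with an entropy-guided collapse order, omitted here): learn which symbols may sit next to each other from an example, then search for an output that satisfies those adjacencies. Note there are no transition probabilities anywhere, only legal/illegal pairs, which is what makes it constraint satisfaction rather than a Markov model.

```python
# Learn adjacency constraints from a made-up 1-D "image".
sample = "AABBBACCAAB"
allowed = {(sample[i], sample[i + 1]) for i in range(len(sample) - 1)}
symbols = sorted(set(sample))

def generate(n, prefix=""):
    """Backtracking search (a tiny sudoku-style solver) for a length-n
    string whose every adjacent pair appears in `allowed`."""
    if len(prefix) == n:
        return prefix
    for s in symbols:
        if not prefix or (prefix[-1], s) in allowed:
            out = generate(n, prefix + s)
            if out is not None:
                return out
    return None

out = generate(10)
print(out)
```

Swapping the deterministic symbol order for a random one would make the output varied without changing the underlying mechanism.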