Looking at a 10-year window: there's a low but non-zero probability that I'll be able to correctly align and distribute DOM elements both horizontally and vertically. I might accomplish this by dropping support for the Internet Explorer family of browsers.
> I might accomplish this by dropping support for the Internet Explorer family of browsers.
I was just working on some legacy stuff last week. Only worked in IE11, in compatibility mode. I had to use vanilla JS to get some data out of a huge form.
Yeah, document.querySelector('#app') doesn't work. I found out, "It only works on modern browsers" which IE11 doesn't qualify as, which is kind of funny to me. I ended up using document.getElementById('app') instead though, which did work.
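For what it's worth, a defensive helper for this fallback pattern might look like the sketch below. The `legacyDocument` stub is a made-up stand-in for an old engine's document object, included only so the sketch is self-contained outside a browser:

```javascript
// A helper that prefers querySelector but falls back to
// getElementById when querySelector is missing or unusable.
function byId(doc, id) {
  if (typeof doc.querySelector === 'function') {
    return doc.querySelector('#' + id);
  }
  return doc.getElementById(id); // legacy path
}

// Made-up stub standing in for a legacy document object,
// so this sketch runs outside a browser.
const legacyDocument = {
  elements: { app: { id: 'app', tagName: 'DIV' } },
  getElementById: function (id) { return this.elements[id] || null; }
};

console.log(byId(legacyDocument, 'app').id); // prints "app"
```

In a real page you'd pass the browser's `document` instead of the stub, and the helper would take the `querySelector` branch on any modern engine.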
Still, our tools will require us to pick one screen size; they'll handle smartphone, tablet, or desktop resolutions gracefully, but only one at a time. Magically handling all three will take a bit longer.
In the field of education, I believe we're not that far off from having basic teachable AI agents.
As the maxim goes, "The best way to learn is to teach". My vision here is that for any new topic a student learns, they (or the instructor) would be able to instantiate an AI agent with relevant preliminary knowledge, for the student to practice on. The student would try to teach the agent facts and/or how to perform basic tasks, and the agent, with some basic metacognition would be able to query the student regarding any unclear or conflicting points.
It definitely won't be anywhere near Turing Test level in 5 years, but I believe that by then we'll have something useful. And beyond that, I think there's real potential here, both for revolutionising education, and further down the line in terms of AGI.
This is slightly tangential, but this article from a few days ago strengthened my belief that we're getting closer - https://reiinakano.com/2019/11/12/solving-probability.html
In personalized AR education, there's an opportunity of peer AI agents. Companions and colleagues.
Ones with very non-human in/capability profiles. How to handle expectation management, especially with younger students, is a challenge. Visibly-malfunctioning cartoonish Sparky, the flaky robot dog?... If it misunderstands or forgets what you said, or is emotionally blind, well that's no surprise, it's broken.
I am really excited about this possibility! I'm imagining AI augmenting our ability to read and to assess reading comprehension; it would be amazing for kids who are weak in certain areas to get the feedback and help they need. Thanks for sharing the link!
A simple bot that a student could ask for advice ("I don't understand Python WHILE loops") and be pointed to a YouTube video coupled with some easy examples.
I see that is almost possible now.
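A minimal keyword-matching sketch of such an advice bot might look like this. The topic table and advice strings here are made-up placeholders; a real version would link out to curated videos and exercises:

```javascript
// Hypothetical advice bot: match keywords in a student's question
// against a hand-curated table of topics and suggestions.
const resources = [
  {
    keywords: ['while', 'loop'],
    advice: 'Watch an intro video on while loops, then try printing 1 to 10 with one.'
  },
  {
    keywords: ['function'],
    advice: 'Start by defining and calling a function that adds two numbers.'
  }
];

function advise(question) {
  const q = question.toLowerCase();
  for (const entry of resources) {
    // A topic matches only if every keyword appears in the question.
    if (entry.keywords.every(k => q.includes(k))) {
      return entry.advice;
    }
  }
  return 'No match found; please ask an instructor.';
}

console.log(advise("I don't understand Python WHILE loops"));
```

Even something this crude covers the happy path; the hard part is the long tail of questions that fall through to the "ask an instructor" branch.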
In about 10 years, it will be possible to run a cell lysate sample through a mass spec machine and get what a present-day scientist would call a pretty good picture of everything going on at the proteomic/metabolomic level, in perhaps ~5 hours.
But in that time we will have probably discovered several currently-unappreciated, biologically relevant biochemical mechanisms which we can’t efficiently probe like this. And also it will be considered next to useless because it doesn’t work on single-cell samples. :)
Years ago, I was at a biology outreach-to-the-humanities seminar. Someone asked about the driver of these decades of progress. They were seemingly fishing for some Kuhnian escape from old oppressively restraining dogmas. And were visibly unhappy with the answer that it was mostly economics. Rapidly advancing technology changing the set of questions we can afford to pursue.
Enlightenment will be achieved when we develop One True Platform which marries front end and backend development so that we don’t need to maintain complex frameworks whose sole reason for existence is to make sure the computers don’t blow up passing data back and forth.
Now that the client is the MVC execution environment with the client-server interaction used mostly for data replication, plus some extra invokable server-side behaviours written in isomorphic code, we can congratulate ourselves on having more-or-less reinvented Lotus Notes.
But more broadly, JavaScript has already achieved your vision :)
https://github.com/pubkey/rxdb
I'm not even convinced current data/networking conventions are optimal. Something will come out, probably not before 2025, that will make us think of traditional network requests (and definitely the DOM) as archaic.
In 10 years I think we’ll be able to generate entire movies programmatically.
In the RF / wireless world, in the coming 5-10 years we may finally see commercial use of large phased arrays. This was supposed to come with mm-wave 5G, but probably will get pushed to 5G++ or 6G. Starlink actually looks on track to be the first consumer 'killer app'.
Hard to overstate how big this will be for maintaining growth in wireless capacity. I think there's a timeline where we ditch copper coax and even buried fiber in most infrastructure.
I started working in RPA (robotic process automation) and what our team is working on will end up putting a lot of people out of work. In 3-5 years, I would expect most data entry jobs to be eliminated by the scripts our team is writing.
The scary part? I work in the health care industry. This means a lot of the heavy forms processing that goes on will soon be completely automated by robots and without any human interaction or decision making. The future is cold and calculating; without any empathy or consideration for the patient - only the bottom line for the provider is what matters.
Nearly every day I get the statistics on how many jobs our team's bots are replacing. In one instance, we had several bots that effectively replaced over 1,000 FTEs and saved the company close to $3 million. We have over 600 bots running right now, which puts us in the top 1% of all companies in the country, and they're looking to expand that number even more.
Nearly every day I feel the moral weight of what I'm doing and it gives me pause.
I'm a project manager. Putting aside the possible/not possible speculation for a moment, what I hope to see in the near future is a kind of support system that could evaluate work in progress, work done and other factors to better help us with risk management. Schedules are tighter than ever, deadlines are seen as life or death, penalties for delays are easily surpassing the million USD marks. So I think we need all the intelligence we can get to understand what's really going on in project and make the right decisions to minimize/eliminate risk.
I’m not sure of your industry, so deadlines could be really important there, but in almost every programming job I’ve had, the deadlines were frequently just self-imposed hysteria. Management would make a big deal, but when you dug into the actual consequences of a missed deadline, they were usually minor. I actually think the culture of tracking everything can make things worse, because when everyone is at full capacity there’s zero slack for process optimization, serendipity, or experimentation. It’s like the more overly “busy” everyone is in a company, the more collective intelligence, empathy, and perspective is lost.
It’s also essentially lost on almost all management that coding is almost always a creative endeavor. After all, if the thing you’re building already existed, you could just buy it. Creative works are extremely hard to estimate in any meaningful way.
I think the problem is that there isn't a higher level summary that can tell you when a piece of a project will deliver. The detail of that delay matters.
To give an example: I can't say it takes 1 year to build a widget, so this widget will be built in a year. It matters that Max is working on a piece of code but will be out in May, and his work won't be sufficiently documented, so Shannon won't be able to make enough progress on this particular bug, pushing the project out two weeks. The particulars of a particular project matter all the way down.
The core issue with evaluating progress is how to actually measure progress.
More often than not, progress is measured in time wasted rather than value created.
Not only is it very difficult, verging on impossible, to estimate work items on a time scale larger than a few weeks, but doing so also gives rise to the wrong assumption that time spent equals progress made, not to speak of the wrong incentives a time-based approach to work estimation brings about.
Hence, if you want to assess work in progress and risk you need to define what the terms "progress" and "risk" actually mean for your projects first.
Not sure about 3-5 years, but within the next 10 years we'll likely have 3D sensors that can see hundreds of meters in daylight and resolve to sub-centimetre accuracy at megapixel resolutions and 30+ fps, at the price and form factor of an entry-level DSLR.
In 15 to 20 years every smartphone (or perhaps pair of AR glasses) will have one of these.
> In 15 to 20 years every smartphone [...] will have [3D sensors that can see 100s of meters in daylight, resolve to sub-centimetres at megapixel resolutions with 30+ fps].
I'm a total layperson with cameras and had an abiding sense that there is a fairly hard limitation on what's possible within a smartphone-type housing (because the sensor is limited by the amount of light). I'd love to hear more about how this perceived physical barrier is being overcome!
Gathering actually reliable data from the healthcare system will enable the rise of true clinical science and the fall of the clinician-researcher star system.
I think we will start seeing more and more advanced composite 'metamaterials' being applied in the world outside of research labs.
These are materials with engineered structures at usually the nano or micro scale that have unique/unusual properties. Things like better antennas, imaging devices, or even materials that can perform computations.
As the manufacturing processes mature, I think we will start seeing them become more widespread. The defence industry in particular is interested in this at the moment, but the potential applications are much broader.
I'm looking forward to the consumer release next year of 1080p AR glasses. And hope one of them has sufficient visual quality and pragmatics to displace a lot of my laptop screen use.
In 3-5 years? Apple is rumored to intend both headset and glasses. So I hope for all-day AR, with >1080p resolution, eye tracking, and hand tracking, that Just Works. Enabling 3D GUIs. At least shallow ones - avoiding vergence-accommodation conflict in consumer devices may take additional years.
Expect to see the first-ever commercial liquid-fueled nuclear power plants under construction by 2025, maybe even operational. China will probably be the very first.
I'm a front end web developer. I hope we get better DOM - WebGL integration to enable some really nice effects and optimizations. Most users have reasonably powerful GPUs, even on cheap smartphones, but the only way to utilise them in a web page is disappointingly separate from the HTML side of things. Hardware-accelerated position updates, lists, etc. (more than CSS already provides) would be awesome.
It's crazy that N64 games (e.g. Mario) have really interesting menu screens, with nice animations, an interesting background, menu items that pop and jiggle... all pretty much impossible in HTML for someone like me.
Within five years there should be multiple AIs that specialize in different types of programming. They will have a combination of a natural language interface and interactive screens.
Most of these will be based on starting with existing template applications and tweaking them to handle special cases. They will manage that by training neural networks on datasets that provide requested tweak descriptions and the resulting code or schema changes. They will have a fallback to manually edit formulas or code when necessary. AIs will also be trained to read API descriptions and write code to access them.
Within 10 years fully general purpose AI will be available that can completely replace programmers even for difficult or novel problems.