"Most importantly, we have to change the culture of hardware design. Today, we don’t have open sharing … "
This, to the 100th power.
The culture in the EDA industry is stuck in the 1950s when it comes to collaboration and sharing; it's very frustrating for newcomers and people who want to learn the trade.
As was pointed out by someone in another hardware related HN thread, what can you expect from an industry that is still stuck calling a component "Intellectual Property"?
The un-sharing is built into the very names used to describe things.
It's not just the EDA industry; it's almost all of hardware, including most of the embedded software people. Yeah, I get it - I too am sometimes stuck using some old proprietary compiler with a C standard older than some of the people I work with - but come on. At my last job I used a Qualcomm radio, and they ended up giving me close to a thousand zip files of driver revisions going back a decade because their build process kept screwing up the output source code. All it took was running an open source static analysis tool and 200 man-hours of junior developer time to fix the root causes of 90% of their bugs - for a product that has made billions of dollars (and I'm not talking about the GSM/4G chips with crazy standards that require tons of real R&D).
You read that right. Their build system outputs source code, generated from another C code base, using macros to feature-gate bug fixes depending on who the customer is. The account managers would send a list of bugs that a given client had experienced, and the back-office engineers would make a build that fixed only those bugs.
Forget about collaboration and sharing. They haven't even figured out the basic business processes that many software engineers take for granted.
I designed the ABEL language back in the '80s for compiling designs targeted at programmable logic arrays and gate arrays. It was very successful, but it died after a decade or so.
It'd probably be around today and up to date if it were open source. A shame it isn't. I don't even know who owns the rights to it these days, or if whoever owns it even knows they have the rights to it, due to spinoffs and mergers.
>> "Most importantly, we have to change the culture of hardware design. Today, we don’t have open sharing … "
I'll have popcorn ready for the eventuality where IP blocks are widely available under GPL-type FOSS licenses and Intel|AMD|ARM|TI|... is eventually found to have included one or more of those open sourced blocks, with an incompatible license, in their chips.
There have been some attempts. E.g., the MIT project called Sirus, which used Python 2.5 as a DSL to describe high-level components you could combine and reuse, then process to generate SystemC or Verilog.
Unfortunately, while the tool is pretty nice, it never saw major adoption (Qualcomm has some tool using it internally, and a few others), and we haven't seen the idea of making reusable libs and components flourish.
Somebody would need to find this project and port it to Python 3.6. With current tooling, that would make writing code in it really nice and ease the creation of reusable components.
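For a sense of why a Python-as-HDL approach is appealing for reuse, here is a minimal, hypothetical sketch (class and signal names are invented, not taken from the project mentioned above) of reusable components that emit Verilog:

```python
# Hypothetical sketch of a Python-embedded HDL: reusable component
# objects are composed in ordinary Python, then emit Verilog text.

class And2:
    """A reusable 2-input AND component."""
    def __init__(self, name, a, b, y):
        self.name, self.a, self.b, self.y = name, a, b, y

    def to_verilog(self):
        return f"assign {self.y} = {self.a} & {self.b};"

class Module:
    """Collects components and emits a complete Verilog module."""
    def __init__(self, name, inputs, outputs):
        self.name, self.inputs, self.outputs = name, inputs, outputs
        self.parts = []

    def add(self, part):
        self.parts.append(part)
        return part

    def to_verilog(self):
        ports = ", ".join(self.inputs + self.outputs)
        lines = [f"module {self.name}({ports});"]
        lines += [f"  input {p};" for p in self.inputs]
        lines += [f"  output {p};" for p in self.outputs]
        lines += ["  " + part.to_verilog() for part in self.parts]
        lines.append("endmodule")
        return "\n".join(lines)

m = Module("and_gate", ["a", "b"], ["y"])
m.add(And2("u0", "a", "b", "y"))
print(m.to_verilog())
```

The win is that components are plain Python objects, so loops, parameterization, testing, and packaging all come for free from the host language's ecosystem.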
Every CAD system I know of supports ways of grouping circuits into modules and libraries for multiple instantiation. And those libraries are distributable.
I wonder how a system would turn out in which all electronics/software were forced to have both their diagrams/schematics and code published, but in exchange received copyright-style protection for ~7 years or so.
My advisor at Stanford is working on an open-source hardware toolchain to solve these exact problems. The Agile Hardware Center is trying to bring software methodologies of rapid prototyping and pervasive code sharing/reuse to ASICs, CGRAs, and FPGAs: https://aha.stanford.edu/
It’s a bit ironic that a decade or two ago, there was a drive to make software development look more like hardware development (as if it were somehow better), but the trend has swung all the way around.
I just got back from the Design Automation Conference in San Francisco. It is one of the major EDA conferences. Andreas Olofsson gave a talk about the silicon compiler. There was serious discussion about open source EDA. As far as I could tell it is still unclear what the role of academia will be. It seems tricky to align academic incentives with the implementation, and most importantly, maintenance of an open source EDA stack. However, there is quite some buzz and people are enthused. A first workshop, the "Workshop on Open-Source EDA Technology" (WOSET) has been organized.
I also thought I'd try to answer some questions that I've seen in the comments. Disclaimer: as a lowly PhD student I am only privy to some information. I'm answering to the best of my knowledge.
1) As mentioned by hardwarefriend, synthesis tools are standard in ASIC/FPGA design flows. However, chip design currently often still takes a lot of manual work and/or stitching together of tools. The main goal of the compiler is to create a push-button solution. Designing a new chip should be as simple as cloning a design from GitHub and calling "make" on the silicon compiler.
2) Related to (1). The focus is on automation rather than performance. We are okay with sacrificing performance as long as compiler users don't have to deal with individual build steps.
3) There should be support for digital, analog, and mixed-signal designs.
4) Rest assured that people are aware of yosys and related tools. In fact, Clifford was present at the event :-) Other (academic) open source EDA tools include ABC for logic synthesis & verification, the EPFL logic synthesis libraries (disclaimer: co-author), and Rsyn for physical design. There are many others; I'm certainly not familiar with all of them. Compiling a library of available open source tools is part of the project.
Edit: to be clear, WOSET has been planned, but will be held in November. Submissions are open until August 15.
> Compiling a library of available open source tools is part of the project.
Compiling? What's that even mean? What about funding?
I wrote arachne-pnr, the place and route tool for the icestorm stack. My situation changed, I didn't see a way to fund myself to work on it and I didn't have the time to work on it in my spare time. I assume that's one of the reasons Clifford is planning to use VPR going forward (that, and it is almost certainly more mature, has institutional support at Toronto, etc.) I would have loved to work on EDA tools. I've moved on to other things, but I wonder if these programs will fund the likes of Yosys/SymbiFlow/icestorm/arachne-pnr.
On the opposite side of the spectrum, there's Chuck Moore (Forth creator) who in trying to find the simplest combination of software and hardware for his projects devoted a lot of time into a DIY VLSI CAD system. Fascinating history behind it, although the actual OKAD system is essentially trade secret for his company.
Side note: when people complain about the military budget, projects like these should be noted. Political reality in America, today, is military R&D and jobs programs are easier to fund than civilian ones; so that’s where projects go to live.
This $100M research project is equivalent to about one fighter plane (without the ammunition, I guess). Imagine what we could fund for the cost of an aircraft carrier.
So instead of explicitly funding research we should implicitly fund it in a roundabout way and you're saying people shouldn't complain about this state of things?
I have questions, if anyone knows something about hardware. What would a "silicon compiler" let one do? What exactly gets easier/cheaper and what exactly could new chip designs yield?
It's difficult for me to be sure, since the article makes it sound as if they're attempting something novel, but synthesis tools are standard in ASIC/FPGA design flows.
Currently, the best synthesis tools are closed-source and extremely expensive. Think of the benefit that having gcc/clang as free software brought to the software world; that is the kind of effect at stake here.
Usually, hardware designers will write RTL code (Verilog/VHDL) which describes the hardware slightly above the gate level. In order to turn this description into a web of logic gates (called a netlist), the design is processed by a synthesizing program. The produced netlist describes exactly how many AND, NAND, OR, etc. gates are used and how they're connected, but it doesn't actually describe where the gates are placed on the chip or the route the interconnections take to connect the gates. To generate that info, the netlist is fed into another synthesis tool (usually called place and route).
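Very loosely, that RTL-to-netlist step can be sketched in Python (the expression encoding and wire names here are invented for illustration; real synthesis also optimizes the logic and maps it onto a foundry's cell library):

```python
from itertools import count

def synthesize(expr, netlist, names):
    """Flatten a boolean expression tree into gate instances.

    expr is either a wire name (str) or a tuple (op, operand, ...).
    Appends (op, input_wires, output_wire) tuples to netlist and
    returns the wire carrying the expression's value.
    """
    if isinstance(expr, str):
        return expr
    op, *args = expr
    ins = [synthesize(a, netlist, names) for a in args]
    out = f"n{next(names)}"          # fresh internal wire
    netlist.append((op, ins, out))   # one gate instance
    return out

netlist = []
# RTL-level intent: y = (a AND b) OR (NOT c)
y = synthesize(("OR", ("AND", "a", "b"), ("NOT", "c")), netlist, count())
for gate in netlist:
    print(gate)
```

The resulting netlist lists three gate instances and their wiring; place and route then decides where wires n0..n2 and their gates physically go on the die.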
This is a simplified version, but even at this level of detail, there are important factors affecting chips.
- How many gates? (fewer might be better)
- How far are the gates from each other? (closer is better: less power, area, cost, and better timing)
- How often will the gates switch? (less switching means less dynamic power)
- More....
More advanced synthesis tools improve area, cost, power, timing. They also allow designers to have less expertise and still obtain the same result as experienced designers by optimizing out micro-level inefficiencies in the design (though experienced designers will also lean on the synthesis tool).
I only have a vague idea about the first two of your questions: I guess they use the term "silicon compiler" to describe an ideal state of chip synthesis software where you can go from high-level logic descriptions (VHDL source) to the final chip masks without any human intervention. Right now, this requires a lot of manual work in intermediate stages of the process. Being able to do away with that would simplify and speed up the process. But it also means tackling pretty nasty NP-complete optimization problems.
I don't quite get the open source angle in the comments here.
If I manage to get my grubby hands on a moderately modern computer, I can use all manner of open source software and I can create wonderful new software.
The barrier of entry is fairly low in rich countries.
If AMD open sourced all the design aspects of their chips, wouldn't I have to get a loan to build a $100 million fab to have any practical way to enjoy it?
I can see that if Intel/AMD/NVidia/Apple shared all aspects of their chips, cross-pollination might bring great things, and academic research would be boosted and might end up giving back more to the community at large, but you are talking about very few entities across the world that can afford fabs.
I believe you have it backwards: this is exactly why there is very little risk for the big guys in sharing their knowledge.
The financial entry barrier for building a fab is so huge that what in heaven's name would Intel lose if they published the RTL for - say - their integer division hardware, to show and teach the whole world how it's done when real professionals take a stab at it?
And if they're scared AMD might copy their integer division, why not publish the Verilog code from 2 or 3 generations-old hardware? (And this is probably a bad example; I believe AMD and Intel are essentially done competing on stuff like that.)
But what I am talking about here is basically unthinkable given the current culture in the EDA world: a person suggesting this inside one of the big shops would be committing career suicide.
Conversely, if you navigate EDA discussion boards a little bit, there is no end to the snarky or sometimes downright insulting comments made by big shop insiders about how lame and terribly inefficient the open source hardware designs published on the net actually are.
In other words: mocking outsiders for their ignorance instead of teaching them how to do cool stuff. That's the culture of the EDA world. Time for a change.
> If AMD open sourced all the design aspects of their chips, I would have to get a loan to build 100 million fab to have any practical manner to enjoy it?
Try $1 to 5 billion. We're talking about something that over half of all extant nation states wouldn't be able to pull off without devoting 10-50% of their annual GDP to the project.
The argument to share your source code extends beyond the argument that the average end-user can take that code, alter it and redistribute it. The bar for entry for contributing to hardware is not 'has their own multi-billion dollar fab center'.
What even is this project? There are no details on the DARPA page either.
Is it for PCB design, ASIC design or both? Is a constant current source also considered a “small chip” or just digital designs?
Basically every EDA tool already has the ability to group sub modules which one could distribute as open source if they chose.
Do it in kicad and put your circuit into a hierarchical symbol if you must be all open source.
I get hard IP blocks from vendors all the time for inclusion in our ASICs.
It’s not the EDA tools that are preventing “openness”.
I was just joking the other day how all the PCB designs I’m reviewing lately are just conglomerations of app note circuits and it’s really boring. So to me it seems like there’s plenty of design reuse. :)
I'm surprised to see no recognition of yosys, arachne-pnr and the icestorm tools which together are a free and open source HDL tool chain which already exists and is pretty widely used.
These two projects are exactly the road the EDA industry should be taking.
Unfortunately, they make very slow progress because they have to painstakingly reverse-engineer everything (with the possible exception of Lattice stuff).
For Xilinx chips, where exactly nothing is publicly documented at the lower levels of the stack, they have to spend mountains of time re-discovering everything.
Even if I deeply admire the effort and how far they've gotten, I can't help but think: what a terrible waste of human talent and time.
Edit: I once asked a Xilinx employee why they didn't open source their entire software stack, since it struck me that they were in the chip manufacturing business, not the toolchain business (a blisteringly obvious fact when you look at the quality of such monstrosities as, e.g., Vivado), and that open sourcing the tools could enlarge their target market by a large margin.
The culture is so broken in that space that I don't think he even actually understood the question.
I wonder if the decline of Moore's Law will eventually lead to the commoditization of ASIC fabrication?
Of course fabricating a chip will never be as cheap as writing a bit of software, but maybe it will eventually be as cheap as, say, injection molding a piece of plastic?
Heading in the opposite direction - at least for the bleeding edge 7nm/5nm/3nm ASICs you want in your next computer or smartphone.
Manufacturing costs (particularly fixed costs) are going up exponentially. We're getting stuck on economics before physics. You need to be able to sell 10 million+ parts to cover your costs.
There are more opportunities if you don't need the best performance or lowest power and use an older manufacturing process node like 65nm.
Having these tools open source and freely available is a huge deal for so many industries. I've worked with these tools at an academic level and now at a startup, and the magnitude of this enabling technology is amazing. Just the tooling investment will be huge; making the core solvers and algorithms more accessible should spawn a whole new wave of startups/research in effectively employing them. Just the other day, I heard of friends building theorem provers for EVM bytecode to formally check smart contracts and eliminate bugs like these [0].
These synthesis tools roughly break down like this:
1. Specify your "program"
- In EDA tools, your program is specified in Verilog/VHDL and turns into a netlist, the actual wiring of the gates together.
- In 3D printers, your "program" is the CAD model, which can be represented as a series of piecewise triple integrals
- In some robots, your program is the set of goals you'd like to accomplish
In this stage, representation and user friendliness are king. CAD programs make intuitive sense and have the expressive power to describe almost anything.
Industrial tools will leverage this high-level representation for a variety of uses: in the CAD of an airplane, checking whether maintenance techs can physically reach every screw; in EDA, providing enough information for simulation of the chip or high-level compilation (Chisel).
2. Restructure things until you get to an NP-complete problem, ideally in the form "minimize cost subject to some constraints". The result of this optimization can be used to construct a valid program in a lower-level language.
- In EDA, this problem looks like "minimize the silicon die area used and layers used and power used subject to the timing requirements of the original Verilog", where the low level representation is the physical realization of the chip
- In 3D printers it's something like "minimize time spent printing subject to it being possible to print with the desired infill". Support generation and other things can be rolled in to this to make it possible to print.
Here, fun pieces of software from the field of optimization are used: things like Clasp for answer set programming, Gurobi/CPLEX for mixed integer or linear programs, and SMT/SAT solvers like Z3 or CVC4 for formal logic proving.
A lot of engineering work goes into these solvers, with domain-specific extensions driving a lot of progress [1]. We owe a substantial debt to the researchers and industries that have developed solving strategies for these problems; they account for a significant amount of why we can have nice things, from what frequencies your phone uses [2] to how the NBA schedules basketball games.
This is the stuff that really helps to have as public knowledge. The solvers at their base are quite good, but seeding them with the right domain-specific heuristics makes so many classes of real-world problems solvable.
3. Extract your solution and generate code
- I'm not sure what this looks like in EDA, my rough guess is a physical layout or mask set with the proper fuckyness to account for the strange effects at that small of a scale.
- For 3D printers, this is the emitted G-code
- For robots, it's a full motion plan that results in all goals being completed in an efficient manner.
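At toy scale, step 2 above can be mimicked with brute force instead of a real solver. The following sketch (the gates, nets, and "timing" constraint are all invented for illustration) exhaustively searches placements of four gates on a row, minimizing wirelength subject to two critical gates being adjacent:

```python
from itertools import permutations

GATES = ["u0", "u1", "u2", "u3"]
NETS = [["u0", "u1"], ["u1", "u2"], ["u2", "u3"], ["u0", "u3"]]
CRITICAL = ("u0", "u1")   # must be adjacent: a stand-in for a timing constraint

def wirelength(pos):
    """Total span of each net, given gate -> slot positions."""
    return sum(max(pos[g] for g in net) - min(pos[g] for g in net)
               for net in NETS)

def best_placement():
    """Exhaustively minimize wirelength subject to the adjacency constraint."""
    best, best_cost = None, float("inf")
    for order in permutations(GATES):       # assign gates to slots 0..3
        pos = {g: i for i, g in enumerate(order)}
        if abs(pos[CRITICAL[0]] - pos[CRITICAL[1]]) != 1:
            continue                        # constraint violated: skip
        c = wirelength(pos)
        if c < best_cost:
            best, best_cost = pos, c
    return best, best_cost

pos, c = best_placement()
print(pos, c)
```

Brute force dies immediately past a handful of gates, which is exactly why the ILP/SAT/ASP solvers mentioned above, seeded with domain heuristics, carry the real workload.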
The participating companies make chips that require a huge amount of resources: design engineering time, CAD software, tape-out cost, validation engineering time, ...
The CAD software is expensive, but it's not barrier-of-entry expensive compared to design engineering time, tape-out cost, etc.
If the CAD software cost gets reduced, it would cut costs for all companies involved while still leaving a barrier of entry that's way too high for anyone but the best-funded companies.
Reusability in raw Verilog is hard.
His site has been down for a while, but someone thankfully mirrored most of the pages here: https://colorforth.github.io/vlsi.html
More history about OKAD, plus links to more about Forth both software and hardware: http://www.ultratechnology.com/okad.htm
PDF is the document where you write the specs.
GDS (actually GDSII) is the geometry description of the chip layers you send to the foundry for fabrication.
http://nvdla.org/index.html
The source code is on github.
At $250,000 per year per person, that supports 100 people for four years.
I… suppose that's not completely insane?
Once you have a mask set and fab time, it's off to the races. IMO $1M really isn't bad for a simple chip run.
[0] https://anysilicon.com/semiconductor-wafer-mask-costs/
[0] https://hackernoon.com/what-caused-the-latest-100-million-et...
[1] https://slideplayer.com/slide/11885400/
[2] https://www.youtube.com/watch?v=Xz-jNQnToA0&t=1s
Won't this project reduce barriers to entry for their industry? And if so, isn't it against their interests to participate?
Quite the contrary: lowering barriers creates more opportunities and thereby new scope for growth.
It took MSFT 30 years to finally understand that lesson and start offering a free suite of dev tools.
The EDA industry still hasn't grokked that lesson.