Ask HN: If the CPU were redesigned today, with no legacy incentives, what could change?
5 points by LangIsAllWeNeed 2 years ago | 21 comments

What would a new paradigm for CPUs and GPUs look like? Is there some aspect of the legacy system that is not ideal but clearly economically impossible to change? Does the polyfill-style design of CPU microcode have drawbacks?
One analogy is planes. Plane design has remained pretty unchanged since the 1960s because of regulations: a very different design means paying to retrain pilots. Of course planes have still drastically improved across the board, but they are still contorting themselves to adhere to the paradigm of the pilots for economic reasons. Without this, a new plane design might be quite different. (This is part of the Boeing crisis: the software was designed to "polyfill" the change in flight behavior caused by contorting a similar plane frame to fit a better engine.)
- bjourne 2 years ago

I think next-gen CPUs should be tuned for actual workloads rather than the synthetic benchmarks they are often designed to beat. Essentially latency and not throughput. Right now I'm entering text into a text field on a web page and I don't care about sustained throughput, but it would annoy me greatly if, say, switching tabs would cause noticeable lag and/or make the noisy CPU fans spin up.
How do you optimize architectures for event-based "do nothing 99.9% of the time, do A LOT 0.1% of the time" workloads? I don't know. My hunch is that you should prioritize memory latency and pay more attention to worst case rather than best case performance.
- rthomas6 2 years ago

I can almost guarantee that, for this problem, current processors could perform to your expectations, and any problem lies in the way the software is written (prioritizing developer time over performance).
- rthomas6 2 years ago
- Leftium 2 years ago

1. [Ternary computers]: Base-3 is the most efficient of all integer bases; numbers are stored most economically with trits.
2. Something like a [Lisp machine] that is optimized for functional-style programming with immutable data.
[Ternary computers]: https://www.wikiwand.com/en/Ternary_computer
[Lisp machine]: https://www.wikiwand.com/en/Lisp_machine
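For a concrete flavor of what trit arithmetic looks like: the Soviet Setun machine used balanced ternary, where each trit is -1, 0, or +1 and negating a number is just flipping the sign of every trit. A minimal Python sketch of the encoding (function names are my own, not from any of the linked pages):

```python
def to_balanced_ternary(n):
    """Encode an integer as balanced-ternary trits (-1, 0, +1), most significant first."""
    if n == 0:
        return [0]
    digits = []
    while n != 0:
        r = n % 3          # Python's % always yields 0, 1, or 2
        if r == 2:         # a digit of 2 becomes -1 with a carry into the next trit
            r = -1
            n += 3
        digits.append(r)
        n //= 3
    return digits[::-1]

def from_balanced_ternary(digits):
    """Decode most-significant-first balanced-ternary trits back to an integer."""
    value = 0
    for d in digits:
        value = value * 3 + d
    return value

assert to_balanced_ternary(5) == [1, -1, -1]   # 9 - 3 - 1 = 5
# Negation is trit-wise sign flip: no separate "sign bit" or two's complement needed.
assert [-d for d in to_balanced_ternary(7)] == to_balanced_ternary(-7)
```

One design consequence: balanced ternary represents negative numbers natively, so the asymmetry of two's complement (one more negative value than positive) disappears.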
- bruce343434 2 years ago

Heap trees would be less efficient with trits. What do you mean "most efficient of all integer bases"?
- Leftium 2 years ago

Ternary minimizes the product of word length and the number of distinct symbols needed to express a range of numbers[1].
Heap trees were probably optimized for binary, since that's what we use. Perhaps there would be a ternary version of heap trees? Or a totally different ternary data structure that serves the same purpose?
[1]: https://math.stackexchange.com/questions/446664/what-is-the-...
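The radix-economy argument behind [1] can be checked directly: model the cost of representing a number N in base b as b * log_b(N) (symbols per digit position times digit count). The real-valued optimum is b = e ≈ 2.718, so among integers base 3 wins. A quick Python check (the cost model is the standard one from the linked question, the variable names are mine):

```python
import math

def radix_economy(base, n):
    """Cost model: (distinct symbols per digit) * (digits needed) = b * log_b(n)."""
    return base * math.log(n) / math.log(base)

N = 10**6
costs = {b: radix_economy(b, N) for b in range(2, 11)}
best = min(costs, key=costs.get)
print(best)  # 3 — base 3 has the lowest cost of any integer base
```

Note the margin is small (base 3 beats base 2 by about 5%), which is one reason the argument never outweighed the engineering convenience of two-state circuits.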
- Leftium 2 years ago

Slightly off topic, but I'm curious why some users start a comment with "* * *", then edit it later? Is this just an HN thing?
(Parent comment happened to start as one of these "* * *" comments.)
- bruce343434 2 years ago

Do you mean my comment? I did not write nor remove "* * *".
- bruce343434 2 years ago
- Leftium 2 years ago
- bruce343434 2 years ago
- Am4TIfIsER0ppos 2 years ago

Without being too radical, you might merge some things: hardware detection like on x86, feature identification like on x86 (cpuid). Fixed-size instructions seem to be "more popular". Variable-length vector operations/loops seem intriguing (there was a decent presentation about them at FOSDEM this year).
What's probably going to happen is more integration and more drm.
- eimrine 2 years ago

I refuse to understand your C programming point, but answering your question: a computer might be a Lisp machine and/or having von-Neumann architecture instead of Harvard one. CPUs might be more multithreaded and GPUs might be better suited for such things as calling eval().
- LangIsAllWeNeed 2 years ago

How would you understand the C-to-architecture connection?

I have read that C's ideas are baked into how higher-level programming languages work, but modern CPUs are a lot more parallel (among other things) than most languages account for, so the microcode tries to bridge the gap.
- t-3 2 years ago

> having von-Neumann architecture instead of Harvard one.
I think you got that backwards. Von Neumann is mainstream and a few MCUs use Harvard architecture.
- LangIsAllWeNeed 2 years ago
- warrenm 2 years ago

Planes have remained pretty much unchanged because aerodynamics works the same for everyone.
Unless you mean cockpit design :)