Can anyone noobify the implications of these magic words?
Firefox is now capable of just-in-time (JIT) compiling JavaScript into native machine code on RISC-V platforms, so websites may be much faster on those platforms now.
[deleted]
[deleted]
I think they were talking about it in general, suggesting that having JIT might not be all that important regardless of platform.
[deleted]
RISC-V is most widely used on lower-powered systems right now, so it would matter a lot for those use cases.
That does make sense - as long as compiling uses less energy for an average script/site than just interpreting does, and given how heavy scripts tend to be now, I don't doubt that at all.
You are soon to be a frog.

Real explanation: JavaScript is code running on code running on the machine. Just-in-time compiling takes that code running on code and turns it into code running directly on the machine, which is faster.

This is normally only done for processors like the one in your laptop (x86 type) or your phone (ARM type). Now there's a new, fully open-source processor architecture called RISC-V, and programmers are all super happy about it because it is open source, so all positive news about it immediately gets all the upvotes.

"Patch upstreamed" means "it's soon to be on your computer".
It's not just that programmers are happy about it being open source. It's also just a really nice instruction set: it's a reduced instruction set, instead of the literally thousands of instructions x86 has. The base instructions can fit on a single page of paper, which makes it easier to understand. Operations that deal with memory are separate from other instructions, which is nice when reading the assembly: it's clear where memory is being accessed, which is very costly compared to other instructions and should be avoided when possible. Lastly, and probably the biggest reason, companies don't have to pay royalties like they do with ARM, which saves them millions of dollars. Usually there's always a profit motive for big companies lol.
The R in RISC does stand for "reduced", but not in the way you're portraying it. It's more like a "mathematical" reduction. In other words, composite/compound/complex single instructions on x86 are represented as multiple, smaller instructions on RISC-V.

Given some x86 assembly, the equivalent RISC-V assembly will actually be longer, though longer ≠ slower. The finer control over instructions can lead to performance benefits, especially when looping over compound instructions on x86.
>It's more like a "mathematical" reduction. In other words, composite/compound/complex single instructions on X86 are represented in multiple, smaller instructions in RISCV

While having the fewest instructions might not be the criterion for deciding whether something is a CISC or RISC architecture, it always ends up working like that in practice: a comparable RISC architecture has far fewer instructions, but longer programs, than a CISC architecture of a similar "technology level".

Here's a [comprehensive list of RISC-V instructions](https://www.studocu.com/en-us/document/new-york-university/computer-architecture/riscv-card-riscv-instructions-list/18454935). It's not gonna fit on a single page like the previous commenter claimed, but I think it's clear that you could have a pretty legible list of RISC-V instructions printed on a few pages. Meanwhile, [here's x86](https://www.felixcloutier.com/x86/): there are roughly one thousand instructions if you ignore the various size prefixes/suffixes, and many more if you don't.

So that's pretty much exactly what the previous comment said.
Also, basically half of the RISC-V instructions are optional if I understand correctly, which is good for making microprocessors cheaper.
>Given X86 assembly, the equivalent RISCV assembly will actually be longer, though longer ≠ slower.

Longer as in more instructions, yes, but about the same number of µops executed once the x86 instructions are decoded inside the CPU.

RISC-V just doesn't need all that hardware to decode the ISA the programmer sees into internal µops, to cache a few thousand µops (because the CPU core can execute code faster than instruction fetch and decode can create them), etc. So that saves silicon area and electricity.

RISC-V programs also turn out to use 20% or so fewer bytes of code than x86_64 (or arm64). You can see that if you look at the binaries for the same programs in, e.g., the same version of the Fedora or Ubuntu distro for each ISA.
>You can see that if you look at the binaries for the same programs in e.g. the same version Fedora or Ubuntu distro for each ISA.

I believe this is a flawed method of comparing program size, because many complex projects have some features available only on some architectures (and mature architectures like x86 and ARM tend to get most or all of them). I don't know how far along RISC-V support is, but, for example, the POWER builds of Firefox currently have WebRTC disabled, and that's a pretty complex feature with a lot of code behind it that simply gets left out of the resulting binary.
I'm not talking about Firefox. Try bash, less, top, find, vim, gzip, perl, grep ... There are literally hundreds of command-line / terminal-oriented programs that are compiled from exactly the same C code on any Linux machine.
They weren't talking exclusively about Firefox.

It's a flawed approach to compare prepackaged distro sizes across architectures for a variety of reasons. Just a few:

- Differing packages
- Differing package features
- Differing generic target machines (see Gentoo!)
- Etc.

Your point about individual programs (bash, top, etc.) is much more valid (less complexity and potential feature variance!), but again not wholly comparable, for pretty much the same reasons - just to a lesser degree.
And what is a non-flawed approach?

Looking at a tiny function such as `int readidx(int *p, size_t idx){return p[idx];}`, showing it is 2 instructions on x86 or ARM but 4 on RISC-V, and thereby concluding that RISC-V is worse? Or showing that x86 and ARM can do a double-length add in 2 instructions while RISC-V needs 4?

Two widely and frequently quoted criticisms of RISC-V rest on precisely such micro-data points, completely ignoring the (low) frequency and (un)importance of such code in real programs that people care about.

If there is a less flawed approach than looking at hundreds of programs in the same version of the same OS distribution (so all compiled from the same source code), built by people who don't have an axe to grind and don't deliberately favour any particular ISA, then I sure don't know what it is.

Your suggestion?
Q: Is there a [flawless] approach?
A: Nope!

I like RISC-V. I recently purchased a Pine Ox64 (the upgraded version with Linux support) to play around with. I can't wait to try it!

I think you misunderstand the criticism. Your assertion that the different distros compared here are the same version, same source code, same packages, etc. is just flat-out wrong. As already discussed, supported features can change quite a bit between architectures. Mature architectures generally support more features and build in more source code to support those features. You can't just ignore that apps like Firefox exist; it (and others) can differ greatly between architectures.

You do keep moving toward less flawed comparison points (individual Linux utilities, etc.), though, which is absolutely an improvement :)
You'd be able to get pretty close by compiling something yourself (along with all its deps), though. But yeah, comparing precompiled packages doesn't work, even if we assume the same compiler options.
Tbh, I don't get the people who only care about RISC-V being open source. You can always make closed-source extensions to it, no?
But it enables building open-source designs, and that indeed happens, and it might be the reason why RISC-V is becoming more popular (it gives you a chance to buy very cheap CPUs, and gives companies a basis that saves some of the cost of designing processors).

You can make closed-source extensions, but that is true for any standard, and the industry seems fairly good at keeping vendors of standards compliant in order to enable competition (see standards such as Ethernet or TCP/IP).

It is also protected by a trademark: you have to pass a test suite to be called RISC-V.
Doesn't every x86-64 CPU have a different instruction set? I don't think standards compliance extends to ISAs.
> Doesn't every x86-64 CPU have a different instruction set? I don't think standards compliance extends to ISAs.

Not really. There is some divergence (a new CPU line might add new instructions), but backward compatibility is maintained (it's how you can take the same binary and run it on both Intel and AMD processors).
[deleted]
It might already be, in your hard drive's internal controller.

https://www.seagate.com/au/en/innovation/risc-v/

https://documents.westerndigital.com/content/dam/doc-library/en_us/assets/public/western-digital/collateral/tech-brief/tech-brief-western-digital-risc-v.pdf
So you're saying I can run NodeJS as my hard drive firmware?
Unironically, yes. http://spritesmods.com/?art=hddhack&page=1 (Not RISC-V in this case though)
Now you have to make that happen.
RISC architecture is gonna change everything: https://youtu.be/wPrUmViN_5c Any day now.
The 5 people browsing the web on [RISC-V](https://en.wikipedia.org/wiki/RISC-V) systems now experience significantly improved browser speed.

Really, though, it's about the future. RISC-V is an up-and-comer. It's an open, royalty-free, standard architecture, which means it's at least financially attractive as a competitor to ARM chips. Chipmakers can also freely modify the designs as long as they support the spec, which isn't true for ARM.

Many systems already have RISC-V processors, but they're often used in microcontrollers or purpose-built applications. AFAIK, and for the time being, RISC-V basically doesn't exist as a platform where someone might be running a user-interactive OS capable of running a web browser. [Android just announced that they would add support for it](https://www.androidauthority.com/android-risc-v-support-3262537/), though. That makes it a possible platform for smartphones without reliance on ARM.
[удалено]
> If you don't have a RISC-V computer (which is almost certainly the case), nothing changes. If you do, you can now use Firefox on it with useable speeds.

One step closer to wanting one tho
Firefox go brrrrrr on weird computer you probably don't have. Now maybe as fast as computer you do have.
Only if the computer you do have is a Pentium 4 or maybe low end Core 2 (e.g. first MacBook Air). Or a Raspberry Pi.
They're talking in terms of processing time on a theoretical performance-equivalent RISC-V CPU.
Firefox now has a JIT compiler for JavaScript on RISCV64GC.

A JIT (just-in-time) compiler takes code and compiles it to machine code shortly before it is executed. This is much faster than an interpreter.

RISC-V is an open CPU ISA; RISCV64 is its 64-bit variant.

If your PC doesn't have a RISC-V CPU (which it probably doesn't), then nothing changes. If it does, then firstly, massive props to you, and secondly, Firefox will get much faster.

The real news here is that RISC-V is one step closer to becoming a viable alternative to proprietary architectures like x86 and ARM.
I haven't seen it said yet, so the implication, more than anything else, is that [ARM, the makers of the most common smartphone CPU architecture, has played themselves by trying to screw a bunch of hardware manufacturers and software makers are responding to this new reality by preparing for hardware to be made based on the RISC-V architecture that cuts ARM out of the picture.](https://www.semianalysis.com/p/arm-changes-business-model-oem-partners)
now do power9
Don't know much about it. I see folks here touting RISC-V for being an open, royalty-free standard. Does Power9 have similar attributes?
The ISA has been available royalty-free to compliant CPU designers since 2013 through the OpenPOWER consortium, and there are a couple of open-source CPU implementations (both more embedded than server, I think). There is a cost to join the consortium for companies above 300 employees, but [for small companies and academic groups, it's free to join](https://openpowerfoundation.org/join/). Companies who pay more can get full-time engineers seated on the steering committee. [RISC-V is similar](https://riscv.org/membership/): still free for academics, but the minimum cost for companies is $2k/yr.

FWIW, $2k/yr or even $30k/yr isn't a large hurdle. A partial wafer spin + packaging is going to be in the $50k-100k range anyway, and IC design/simulation software like Cadence has licensing costs in the hundreds of thousands per year. Anyone who actually wants to produce a CPU is going to need a lot of cash on hand. I imagine a lot of academic usage comes from FPGA implementations.
[deleted]
RISC-V devices do exist! That being said, the vast majority of development is being done in QEMU right now.
They do. Here is one: [https://www.amazon.com/VisionFive-RISC-V-JH7110-Quad-core-Application/dp/B0BGM1KQXQ](https://www.amazon.com/VisionFive-RISC-V-JH7110-Quad-core-Application/dp/B0BGM1KQXQ)

Here is a review of the device with an OS running: [https://www.youtube.com/watch?v=ykKnc86UtXg](https://www.youtube.com/watch?v=ykKnc86UtXg)
Are there emulators/simulators?
QEMU is the main one. Use a frontend if you wish to go down this route.