-
Okay, thanks @bennycode, I missed the discussion panel for this kind of proposal. Here is the napi-rs documentation: https://napi.rs/. Based on some reading on Google, this blog post (https://www.lohika.com/node-js-n-api-implementation-and-performance-comparison), and also the NextJS team's experience: since this lib performs mainly mathematical calculations, using Rust on the Node.js side could maybe run 10-20x faster.
-
@Moumouls, my friend @ffflorian and I made some experiments and improved the performance of Wire's desktop app using Rust in 2017. In order to gain performance, you have to make sure that the communication between Rust and JS is cheaper than doing the calculation entirely in JS. Otherwise the communication overhead will ruin all the performance you hope to gain. I can imagine that it will be beneficial to use Rust behind the scenes for functions that take a lot of candles at once (like SMA.getResultFromBatch). You can give it a try; it would be great to have faster implementations that follow the same interface. I know that the tulind library achieved something similar by providing C++ bindings for the famous Tulip Indicators.
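To make the batching idea concrete, here is a minimal sketch (my own illustration, not part of trading-signals; the function name and the exact semantics of `SMA.getResultFromBatch` are assumptions) of a napi-rs binding where the whole batch of prices crosses the JS/Rust boundary in a single call, so the conversion overhead is paid once per batch instead of once per candle:

```rust
// Hypothetical sketch: the entire candle batch is passed to Rust in one call.
#[macro_use]
extern crate napi_derive;

/// Simple Moving Average over a batch of prices.
/// Returns one SMA value per full window (semantics assumed for illustration).
#[napi]
pub fn sma_from_batch(prices: Vec<f64>, interval: u32) -> Vec<f64> {
  let interval = interval as usize;
  if interval == 0 || prices.len() < interval {
    return vec![];
  }
  prices
    .windows(interval)
    .map(|window| window.iter().sum::<f64>() / interval as f64)
    .collect()
}
```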
-
So @bennycode, I'm back with my tiny experiment on EMA. I implemented the EMA class in Rust and then exposed the Rust EMA to JS. Here are the results.

The test:

```js
fit('calculates the Exponential Moving Average over a period of 50', () => {
  const ema = new EMA(50);
  const fasterEMA = new FasterEMA(50);
  const rsEMA = new RSEMA(50);
  const randomArray = Array.from({length: 5000}).map(() => Math.random());
  const emas = [ema, fasterEMA, rsEMA];
  emas.forEach((e, i) => {
    console.time(i.toString());
    randomArray.forEach(v => {
      e.update(v);
    });
    console.timeEnd(i.toString());
    console.log(`Result ${i}: ${e.getResult().toFixed(64)}`);
  });
});
```

The result:
The precision seems to be the same at this point. Currently, the Rust EMA uses f64 internally. The Rust EMA is 3.5 times faster than FasterEMA at the same precision. The performance gain on this simple EMA use case is significant; on the server side, the lib could achieve massively improved performance.
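For reference, the recurrence being benchmarked here is the standard EMA, which matches the `weight_factor` in the Rust code below:

$$
k = \frac{2}{\mathrm{interval} + 1}, \qquad \mathrm{EMA}_t = k \cdot \mathrm{price}_t + (1 - k) \cdot \mathrm{EMA}_{t-1}
$$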
Here is my Rust code:

```rust
#![deny(clippy::all)]

use std::cell::Cell;

#[macro_use]
extern crate napi_derive;

/// Plain Rust EMA state. `Cell` provides interior mutability so the
/// napi methods can take `&self`.
pub struct InternalEMA {
  pub prices_counter: Cell<f64>,
  pub weight_factor: f64,
  pub result: Cell<f64>,
  pub interval: f64,
  pub initialized: Cell<bool>,
}

impl InternalEMA {
  pub fn new(interval: f64) -> Self {
    Self {
      prices_counter: Cell::new(0.0),
      // Standard EMA smoothing factor: k = 2 / (interval + 1)
      weight_factor: 2.0 / (interval + 1.0),
      result: Cell::new(0.0),
      initialized: Cell::new(false),
      interval,
    }
  }

  pub fn update(&self, price: f64) -> f64 {
    self.prices_counter.set(self.prices_counter.get() + 1.0);
    // Seed the EMA with the first price.
    if !self.initialized.get() {
      self.result.set(price);
      self.initialized.set(true);
    }
    let result = price * self.weight_factor + self.result.get() * (1.0 - self.weight_factor);
    self.result.set(result);
    self.result.get()
  }

  pub fn get_result(&self) -> f64 {
    // Not enough prices yet to fill the interval.
    if self.prices_counter.get() < self.interval {
      return 0.0;
    }
    self.result.get()
  }
}

/// napi-rs wrapper exposed to JS as `EMA`.
#[napi(js_name = "EMA")]
pub struct EMA {
  engine: InternalEMA,
}

#[napi]
impl EMA {
  #[napi(constructor)]
  pub fn new(interval: f64) -> Self {
    Self {
      engine: InternalEMA::new(interval),
    }
  }

  #[napi]
  pub fn update(&self, price: f64) -> f64 {
    self.engine.update(price)
  }

  #[napi]
  pub fn get_result(&self) -> f64 {
    self.engine.get_result()
  }
}
```
-
Here are some DEMA results:
NAPI-RS is 4.8 times faster than JS. At this point, I think the NAPI-RS implementation will be even faster than JS for more complex calculations.
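For context, a minimal sketch of how such a DEMA can be composed from two of the `InternalEMA` instances shown above (sketch only; the names are illustrative and this is not necessarily the exact code behind the numbers):

```rust
// Sketch: DEMA built from two InternalEMA instances (same file/imports as the
// EMA code above). DEMA = 2 * EMA(price) - EMA(EMA(price)).
#[napi(js_name = "DEMA")]
pub struct DEMA {
  inner: InternalEMA,
  outer: InternalEMA,
}

#[napi]
impl DEMA {
  #[napi(constructor)]
  pub fn new(interval: f64) -> Self {
    Self {
      inner: InternalEMA::new(interval),
      outer: InternalEMA::new(interval),
    }
  }

  /// Returns the current DEMA value as a plain f64.
  #[napi]
  pub fn update(&self, price: f64) -> f64 {
    let ema = self.inner.update(price);      // EMA of the price
    let ema_of_ema = self.outer.update(ema); // EMA of that EMA
    2.0 * ema - ema_of_ema
  }
}
```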
-
Here is the MACD result:
It looks interesting: for MACD, the Rust version is now only 2 times faster than JS. Okay, I found the bottleneck: the NAPI translation for object returns seems to take some time. I implemented a method that does not return the result object on each update.
NAPI-RS is 9 times faster than JS if Rust does not return an object on each update.
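As a minimal sketch of that change (illustrative names, reusing `InternalEMA` and `Cell` from the EMA code above; not necessarily the exact code behind the numbers): `update()` only mutates Rust state and returns nothing, and JS reads single `f64` values through getters only when it actually needs them, so no object has to be converted across the NAPI boundary on every tick:

```rust
// Sketch: avoid per-update object conversion across the NAPI boundary.
// Assumes the same file/imports as the EMA code above (Cell, napi_derive).
#[napi(js_name = "MACD")]
pub struct MACD {
  short: InternalEMA,
  long: InternalEMA,
  signal: InternalEMA,
  macd: Cell<f64>,
}

#[napi]
impl MACD {
  #[napi(constructor)]
  pub fn new(short_interval: f64, long_interval: f64, signal_interval: f64) -> Self {
    Self {
      short: InternalEMA::new(short_interval),
      long: InternalEMA::new(long_interval),
      signal: InternalEMA::new(signal_interval),
      macd: Cell::new(0.0),
    }
  }

  /// Updates internal state only; no JS object is built per call.
  #[napi]
  pub fn update(&self, price: f64) {
    let macd = self.short.update(price) - self.long.update(price);
    self.macd.set(macd);
    self.signal.update(macd);
  }

  /// Cheap primitive getters: converting one f64 is far cheaper than
  /// converting a whole result object on every update.
  #[napi]
  pub fn get_macd(&self) -> f64 {
    self.macd.get()
  }

  #[napi]
  pub fn get_signal(&self) -> f64 {
    self.signal.get_result()
  }

  #[napi]
  pub fn get_histogram(&self) -> f64 {
    self.macd.get() - self.signal.get_result()
  }
}
```

In the benchmark loop above, this saves one object conversion per update (5000 of them), which is where the extra speedup comes from.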
-
!!! Idea canceled after some benchmarking here: #393
I found that the pure JS version is faster with the same precision.
Hi @bennycode,
I currently use your package in a project where I perform many calculations.
Since this lib does heavy calculations, I think that switching the fast implementation from native numbers to Rust with NAPI-RS could increase performance significantly.
I would like to get your point of view on this idea applied to your lib.
As an open-source contributor who is curious about the new trend of making Node.js libs faster with Rust, I found that your lib could be a perfect project for me to learn Node-to-Rust stuff in a useful way.
Feel free to tell me what you think about this.
I'll be happy to take a look and check architectural details to try and benchmark a first part of the lib.