What's the most efficient node.js inter-process communication library/method?
Solution 1
If you want to send messages from one machine to another and do not care about callbacks then Redis pub/sub is the best solution. It's really easy to implement and Redis is really fast.
First you have to install Redis on one of your machines.
It's really easy to connect to Redis:
var client = require('redis').createClient(redis_port, redis_host);
But do not forget about opening Redis port in your firewall!
Then you have to subscribe each machine to some channel:
client.on('ready', function() {
  return client.subscribe('your_namespace:machine_name');
});

client.on('message', function(channel, json_message) {
  var message = JSON.parse(json_message);
  // do whatever you want with the message
});
You may skip your_namespace and use the global namespace, but you will regret it sooner or later.
It's really easy to send messages, too:
var send_message = function(machine_name, message) {
  return client.publish('your_namespace:' + machine_name, JSON.stringify(message));
};
If you want to send different kinds of messages, you can use pmessages instead of messages:
client.on('ready', function() {
  return client.psubscribe('your_namespace:machine_name:*');
});

client.on('pmessage', function(pattern, channel, json_message) {
  // pattern === 'your_namespace:machine_name:*'
  // channel === 'your_namespace:machine_name:' + message_type
  var message = JSON.parse(json_message);
  var message_type = channel.split(':')[2];
  // do whatever you want with the message and message_type
});
send_message = function(machine_name, message_type, message) {
  return client.publish([
    'your_namespace',
    machine_name,
    message_type
  ].join(':'), JSON.stringify(message));
};
The best practice is to name your processes (or machines) by their functionality (e.g. 'send_email'). In that case a process (or machine) may be subscribed to more than one channel if it implements more than one piece of functionality.
Actually, it's possible to build bi-directional communication using Redis. But it's trickier, since it would require adding a unique callback channel name to each message in order to receive the callback without losing context.
So, my conclusion is this: use Redis if you need "send and forget" communication; investigate other solutions if you need full-fledged bi-directional communication.
Solution 2
Why not use ZeroMQ/0MQ for IPC? Redis (a database) is overkill for something as simple as IPC.
Quoting the guide:
ØMQ (ZeroMQ, 0MQ, zmq) looks like an embeddable networking library but acts like a concurrency framework. It gives you sockets that carry atomic messages across various transports like in-process, inter-process, TCP, and multicast. You can connect sockets N-to-N with patterns like fanout, pub-sub, task distribution, and request-reply. It's fast enough to be the fabric for clustered products. Its asynchronous I/O model gives you scalable multicore applications, built as asynchronous message-processing tasks.
The advantage of using 0MQ (or even vanilla sockets via the net library in Node core, minus all the features provided by a 0MQ socket) is that there is no master process. Its brokerless setup is the best fit for the scenario you describe. If you are just pushing messages out to various nodes from one central process, you can use a PUB/SUB socket in 0MQ (which also supports IP multicast via PGM/EPGM). Apart from that, 0MQ also provides various other socket types (PUSH/PULL/XREP/XREQ/ROUTER/DEALER) with which you can create custom devices.
Start with this excellent guide: http://zguide.zeromq.org/page:all
For 0MQ 2.x:
http://github.com/JustinTulloss/zeromq.node
For 0MQ 3.x (A fork of the above module. This supports PUBLISHER side filtering for PUBSUB):
http://github.com/shripadk/zeromq.node
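A minimal PUB/SUB sketch based on the zeromq.node module linked above ("zmq" on npm); the endpoint, topic name and message framing are assumptions for illustration, and the require is wrapped so the framing helpers still run even without the native module installed:

```javascript
var zmq;
try { zmq = require('zmq'); } catch (e) { zmq = null; } // native module may be absent

// 0MQ SUB filtering is a byte-prefix match, so a common convention is to
// put "topic payload" in a single frame. These helpers are illustrative.
function frame(topic, payload) {
  return topic + ' ' + JSON.stringify(payload);
}
function unframe(topic, data) {
  return JSON.parse(String(data).slice(topic.length + 1));
}

if (zmq) {
  var pub = zmq.socket('pub');
  pub.bindSync('tcp://127.0.0.1:5555');

  var sub = zmq.socket('sub'); // normally lives in another process
  sub.connect('tcp://127.0.0.1:5555');
  sub.subscribe('your_topic'); // prefix filter
  sub.on('message', function (data) {
    var message = unframe('your_topic', data);
    // handle message here
  });

  pub.send(frame('your_topic', { hello: 'world' }));
  setTimeout(function () { sub.close(); pub.close(); }, 100); // let the demo exit
}
```

Note that there is no broker in between: the publisher binds, subscribers connect directly, and a slow subscriber never blocks the publisher.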
Solution 3
More than 4 years after the question was asked, there is an inter-process communication module called node-ipc. It supports Unix/Windows sockets for communication on the same machine, as well as TCP, TLS and UDP, claiming that at least sockets, TCP and UDP are stable.
Here is a small example taken from the documentation in the GitHub repository:
Server for Unix Sockets, Windows Sockets & TCP Sockets
var ipc = require('node-ipc');

ipc.config.id = 'world';
ipc.config.retry = 1500;

ipc.serve(function() {
  ipc.server.on('message', function(data, socket) {
    ipc.log('got a message : '.debug, data);
    ipc.server.emit(socket, 'message', data + ' world!');
  });
});

ipc.server.start();
Client for Unix Sockets & TCP Sockets
var ipc = require('node-ipc');

ipc.config.id = 'hello';
ipc.config.retry = 1500;

ipc.connectTo('world', function() {
  ipc.of.world.on('connect', function() {
    ipc.log('## connected to world ##'.rainbow, ipc.config.delay);
    ipc.of.world.emit('message', 'hello');
  });
  ipc.of.world.on('disconnect', function() {
    ipc.log('disconnected from world'.notice);
  });
  ipc.of.world.on('message', function(data) {
    ipc.log('got a message from world : '.debug, data);
  });
});
I'm currently evaluating this module as a replacement for an old stdin/stdout-based solution for local IPC (though it could become remote IPC in the future). Maybe I will expand my answer when I'm done, to give some more information on how, and how well, this module works.
Solution 4
I would start with the built-in functionality that Node provides.
You can use process signalling, like:
process.on('SIGINT', function () {
console.log('Got SIGINT. Press Control-D to exit.');
});
This signal event is described in the Node documentation as:
Emitted when the process receives a signal. See sigaction(2) for a list of standard POSIX signal names such as SIGINT, SIGUSR1, etc.
Once you know about process, you can spawn a child process and hook it up to the message event to retrieve and send messages. When using child_process.fork() you can write to the child using child.send(message, [sendHandle]), and messages are received by a 'message' event on the child.
Also, you can use cluster. The cluster module allows you to easily create a network of processes that all share server ports.
var cluster = require('cluster');
var http = require('http');
var numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
  // Fork workers.
  for (var i = 0; i < numCPUs; i++) {
    cluster.fork();
  }
  cluster.on('exit', function(worker, code, signal) {
    console.log('worker ' + worker.process.pid + ' died');
  });
} else {
  // Workers can share any TCP connection.
  // In this case it's an HTTP server.
  http.createServer(function(req, res) {
    res.writeHead(200);
    res.end('hello world\n');
  }).listen(8000);
}
For 3rd party services you can check: hook.io, signals and bean.
Solution 5
Take a look at node-messenger:
https://github.com/weixiyen/messenger.js
It will fit most needs easily (pub/sub, fire and forget, send/request) with an automatically maintained connection pool.
DuduAlul
Updated on July 05, 2022
Comments
-
DuduAlul almost 2 years
We have a few Node.js processes that should be able to pass messages around. What's the most efficient way of doing that? How about using node_redis pub/sub?
EDIT: the processes might run on different machines
-
DuduAlul almost 13 years: none, I would like to get a sense of what I should try... what are the common possibilities?
-
DuduAlul almost 13 years: well, I am looking for a library; how about redis (pub/sub)?
-
Raynos almost 13 years: inter-process communication across machines has to be done over sockets. You can do it through a database like Redis, but that has to go over the network. UDP is going to be the most efficient.
-
Shripad Krishna over 11 years: UDP is unreliable (there is duplication of packets, packet ordering is not guaranteed) and is not fit for the scenario he describes. It's good for stuff like heartbeats, DNS, streaming or implementing your own protocol.
-
UpTheCreek over 11 years: There's a good discussion here: groups.google.com/forum/?fromgroups=#!topic/nodejs/Pxbb_kgOQEs
-
BJ Bradley over 11 years: Are you looking to send point to point, broadcast, or both? Any concern about reliability of delivery?
-
Evgeniy Berezovsky almost 11 years: messenger's claim that it supports pub/sub is a bit of an exaggeration: every subscriber has to subscribe to a different TCP port, and the publisher has to know all these ports. That defeats the purpose of pub/sub.
-
Gabriel Llamas over 10 years: -1: "the processes might run on different machines". Node has a built-in channel between a process and its children, same machine. The OP needs to communicate between 2 DIFFERENT processes on DIFFERENT machines.
-
NiCk Newman over 8 years: Great answer. I'm just a bit worried about the performance hit of JSON.parse and JSON.stringify. I'm utilizing Node.js for my game server and am using 3, 4, and even more Node instances, all communicating with Redis (so I can do horizontal scaling) -- and it's an aRPG game I am developing, so for example attacking a mob, moving, and all that stuff is going to be extremely busy. Would it still be fine? Or am I borderline into premature-optimization thinking right now? Thanks
-
shashi over 8 years: How was your experience with node-ipc?
-
higginsrob over 8 years: @shashi, I started playing with node-ipc an hour ago and can say it is awesome. I found it easy to set up two Node processes talking to each other over a Unix socket.
-
shashi about 8 years: @higginsrob Yes, I started using it as well, and so far it has been impressive!
-
morten.c about 8 years: @shashi Sorry for my late response, I had little spare time in the last few days. I tested it with sockets and TCP as well; it works without any glitches. If I get a bit more time at the end of the week, I will update my answer to reflect my experiences.
-
NiCk Newman about 8 years: How does node-ipc compare to ZeroMQ? Which one would be faster?
-
Neo over 6 years: In my testing node-ipc can send and receive about 15k messages per second. The built-in child_process fork functionality can do about 25k per second. This test was run on Windows using a 3.5GHz Intel i5.
-
Neo over 6 years: I just ran another test using the ws library to communicate between processes over WebSocket. I was able to get 75k. The key is not to do "on message -> send message"; this will only give 20k. Instead, just fire away using a loop inside of a setInterval function. (You will eventually reach a point where going higher results in less throughput.)
-
Playdome.io over 6 years: This takes years to set up, and it's difficult to maintain. On top of all that, it's not Node.js; it only has a Node wrapper, which makes it even more difficult to fix problems.
-
pcnate almost 5 years: ZeroMQ docs are difficult to read... the thing took me days to get functioning, and I'm sure I'll have to relearn it to troubleshoot anything.
-
Lucio Paiva almost 5 years: Honestly, I don't like it. node-ipc messes with the String prototype (things like '## connected to world ##'.rainbow that you can see in the example above), and its interface is kind of outdated by 2019 standards.
-
stackuser83 over 4 years: the 'different machines' requirement was added as an edit; the built-in process signalling is at least somewhat relevant to answering the question.
-
Wayne Bloss over 2 years: I was surprised that a library which gets a million weekly downloads is modifying the String prototype. I couldn't find the code responsible for it though, so I asked them here: github.com/RIAEvangelist/node-ipc/issues/227
-
Wayne Bloss over 2 years: GitHub question answered (by me): it's just using the colors npm package in the example code only, to provide the 'string'.rainbow field. Those string modifications are not part of node-ipc.