How to share environment variables between shells, like globals tied to a master PID?


Solution 1

You can write the environment to a file with export -p and read it back periodically in the other instances with . (the dot, i.e. source, command). But as you note, if there are concurrent modifications, the result won't be pretty.

If you want modifications to propagate instantly, you'll have to use a wrapper around the export builtin that also propagates the changes to the other shells. On Linux, you can use the flock utility.

global_export () {
  {
    flock 0                      # take an exclusive lock on the fd opened below
    . /path/to/env               # pick up changes made by other shells first
    [ $# -eq 0 ] || export "$@"  # then export the caller's variables, if any
    export -p >/path/to/env      # and write the merged environment back
  } </path/to/env
}

Note that you'll need to call global_export whenever you assign a new value to an exported variable.
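For instance, a self-contained way to try the wrapper, with a temp file standing in for the hypothetical /path/to/env (variable names here are illustrative):

```shell
# demo of the wrapper above, with a temp file in place of /path/to/env
envfile=$(mktemp)
global_export() {
  {
    flock 0                      # exclusive lock on the env file
    . "$envfile"                 # merge in other shells' changes first
    [ $# -eq 0 ] || export "$@"  # then apply our own assignment
    export -p >"$envfile"        # and publish the result
  } <"$envfile"
}

global_export MYVAR=hello   # assign and broadcast in one step
global_export               # no arguments: just re-sync from the file
echo "$MYVAR"               # prints hello
grep -c MYVAR "$envfile"    # the shared file now carries the variable
```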

This is rather clumsy. It's likely that there's a better way to solve your problem. The most obvious way is to have whatever command is using those environment variables read a configuration file instead.
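A minimal sketch of that configuration-file approach (the file name and key are made up):

```shell
# hypothetical config file that consumers re-read at the point of use,
# instead of inheriting a value through the environment
conf=$(mktemp)
echo 'threshold=42' >"$conf"

# each consumer looks the value up when it actually needs it:
threshold=$(awk -F= '$1 == "threshold" { print $2 }' "$conf")
echo "$threshold"   # prints 42
```

Because every consumer reads the file on demand, there is no stale copy of the value sitting in anyone's environment.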

Solution 2

With the fish shell, use universally scoped variables:

set -U var value

Of course, that only works across fish shells run by the same user.

If you want to tie it to a master PID, include it in the variable name:

set -U m${masterpid}var value

Solution 3

In addition to just using flock from Gilles' answer (Solution 1), I'd do something similar to the following:

Register a signal handler to reload the file:

reload_env() {
  flock -s 3    # shared lock: concurrent readers don't block each other
  . /path/to/env
  flock -u 3
}
trap reload_env USR1

A function to write out the file, and send a SIGUSR1 to all the shells notifying them to reload the file:

export_env() {
  flock 3       # exclusive lock while rewriting the file
  export -p >/path/to/env
  flock -u 3
  fuser -k -USR1 /path/to/env   # signal every process holding the file open
}

Open the file, but do nothing with it. This is just so that fuser will be able to signal this shell:

exec 3>>/path/to/env

The order is somewhat important. At least the exec ... part must come after the trap ... part; otherwise we could open the file and then be signaled by the fuser call before we've registered our signal handler.
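The locking moves in those snippets can be exercised standalone; a rough sketch, with a temp file in place of the hypothetical /path/to/env:

```shell
envfile=$(mktemp)      # stand-in for the hypothetical /path/to/env

exec 3>>"$envfile"     # hold the file open on fd 3 for the rest of the session

flock -s 3             # shared lock, as in reload_env: readers coexist
. "$envfile"
flock -u 3

flock 3                # exclusive lock, as in export_env: one writer at a time
export -p >"$envfile"
flock -u 3

# fuser would now list this shell among the file's users, which is what
# lets export_env's `fuser -k -USR1` reach every participating shell
if command -v fuser >/dev/null; then fuser "$envfile"; fi
```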

NOTE: Code is completely untested, but the principle stands.
NOTE2: This example code is targeted at bash.

Solution 4

If ksh93 is an option, there is definitely a way to implement what you want with getter and setter discipline functions that would retrieve/store the variable value from/to a shared storage.

See https://stackoverflow.com/questions/13726293/environment-variables-to-be-used-across-multiple-korn-ksh93-shell-scripts-get for details

Unlike the otherwise useful fish universal variables feature, you have access to the implementation, so you can restrict the value sharing to a group of processes linked to a master PID through some mechanism in your use case, while keeping a single name for the variable.

Solution 5

I would like to point you to named pipes for this purpose. Here is one neat example to consider.

Named pipes are created by the command:

mkfifo pipe_name

You can simply write to it by:

echo "This text goes to the pipe" > pipe_name

On the other hand, you can read from the pipe as:

read line < pipe_name
echo "$line"

You can create several pipes, depending on how many shells you want to communicate. The good thing about pipes is that they are not limited to variables: you can pass any data between the processes, whether whole files, lines of text, or whatever you wish. With this in mind, you can simplify your scripts a lot. For instance, if your variables point to the location of some data or a file that the other script has to read or retrieve, you can pass the data or file directly without needing a variable at all.
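To see the blocking rendezvous in action in a single script (the path is made up, and the writer is backgrounded here only so one shell can play both roles):

```shell
pipe=/tmp/envpipe.$$   # hypothetical name; ties the pipe to this (master) PID
mkfifo "$pipe"

# writer side (would normally be another shell); opening a FIFO blocks
# until the other end shows up, hence the background job
echo "STATUS=ready" >"$pipe" &

# reader side: split a NAME=value line back apart
IFS='=' read -r name value <"$pipe"
printf '%s is %s\n' "$name" "$value"   # prints: STATUS is ready

wait
rm -f "$pipe"
```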



Author: Aquarius Power

"truth is a pathless land" - read Jiddu Krishnamurti

Updated on September 18, 2022

Comments

  • Aquarius Power
    Aquarius Power almost 2 years

So, I need to share environment variables between shells, and for changes to the variables to be promptly recognized by all shells, like global variables. But they must be tied to a master PID, so I can have 2 shell scripts running, each being a master with the same global variable name, and its children will only change the variable tied to their own master PID.

    I use bash for scripting.

    My current researches are these:

    Modules
I read I could use the Modules project for that: http://modules.sourceforge.net/, but I am having huge trouble trying to do it; it seems to lack examples. Also, "modules" is too generic a word, very hard to google because it mixes with many other things. After some weeks my hands are still empty.

    Simple text source file
I also read that I should use a simple text file to store/update/read variables from, as a sourced script file, using flock to avoid concurrent writing. But there are problems with that, like two shells changing a variable simultaneously.

    Alternatively
I am aware that after a "global" variable is read by the shell, it will keep that value until another read is made, and another shell may change it in the meantime.
Maybe a very fast external application could provide such storage, so we would not try to use environment variables as globals? But my guess is any SQL solution will be too slow.

    • Admin
      Admin almost 11 years
      This smells like an XY problem. Environment variables are per-process, so if you want system-wide values, you're using the wrong tool. What problem are you trying to solve?
    • Admin
      Admin almost 11 years
I have a master script that spawns 5 child scripts with xterm, each doing different but related tasks; they all need to know what is going on in the master script, or even in each other, and also be able to report back, or prepare something for the master if needed.
    • Admin
      Admin almost 11 years
Ok, environment variables are completely the wrong tool for that. You should probably be using something that is intended for interprocess communication, such as pipes. Either have the master process broadcast changes to the children (but beware of blocking if the children don't respond), or have the children periodically query the master for updates. A file watched by inotify may also be an option. Or maybe you should have a single program that outputs to all the different terminals.
    • Admin
      Admin almost 11 years
      Why do you need concurrent writing? What is wrong with one process waiting a few milliseconds for the other to finish updating the file?
    • Admin
      Admin almost 11 years
@psusi flock, or using the failure exit status of ln -s realfile lockfile, will do just what you said, "wait a few milliseconds"; without a lock they won't wait before trying to update. But that concurrent update attempt has a flaw: one process may overwrite another process's latest update. I still need to prepare a test case to work on it...
    • Admin
      Admin almost 11 years
      Yes... so the question was what is wrong with using the lock?
  • Aquarius Power
    Aquarius Power almost 11 years
That is very interesting; as I saw, you just redirect the way it sets and gets the variable value. Indeed, I didn't restrict the question to bash... well, good, now I know about it; I wonder if bash could have something similar some day.
  • Aquarius Power
    Aquarius Power almost 11 years
Cool! Interesting how the 1st terminal waited until I issued the read command on the other. What I see initially is that this is a "one to one" communication, and it freezes the 1st terminal until the 2nd reads the pipe; I also need "many to many" communication (many can read, many can write, simultaneously but synchronized). But indeed this new knowledge can be useful, thanks!
  • jlliagre
    jlliagre almost 11 years
It might eventually. Discipline functions are on the bash wish list; bash tends to pick up features that ksh implemented first.