How to initialize an array using awk and bash?
Solution 1
You can't do it precisely this way. You have to let the awk command run to completion, producing a series of output values separated by some delimiter (space, tab, NUL or the like), and then assign those values to the array elements. The reason is that awk is a separate program run by the shell: it finishes before any further shell operations execute, so you cannot assign to shell variables from inside an awk script.
You can do something like the below with your simple example, but I'm not certain the awk code you supplied will do what you want in any case. Note that the loop starts at i=0, and $0 is the entire record, so the whole first line is printed before its individual fields. Since only the first record matches NR==1, later records produce no output; the second, patternless action { } below merely makes that explicit. Also, turn off filename wildcard expansion (set -f) in case the line contains wildcard characters such as *, ? or [.
set -f
arr_values=(`awk '
    NR==1 {
        for (i=0; i<=NF; i++)
            print $i
    }
    { }' file.txt`)
set +f
for ((i=0; i<${#arr_values[@]}; i++))
do
    echo "${arr_values[i]}"
done
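On bash 4.0 or later, a mapfile-based variant sidesteps the globbing problem entirely, because no unquoted expansion ever takes place. This is only a sketch under that assumption; the sample input written to file.txt is made up for illustration:

```shell
#!/usr/bin/env bash
# Sketch: read awk's per-line output into an array without any unquoted
# expansion, so no set -f is needed. Assumes bash >= 4 for mapfile.
printf 'a *? c\n' > file.txt    # hypothetical sample input containing wildcard characters

mapfile -t arr_values < <(awk '
    NR==1 {
        for (i = 1; i <= NF; i++)
            print $i
    }' file.txt)

for ((i = 0; i < ${#arr_values[@]}; i++))
do
    echo "${arr_values[i]}"
done
```

mapfile -t stores one array element per line of input and strips the trailing newline, so wildcard characters such as *? pass through literally.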
Solution 2
You might get better answers if you explained the context as to why awk ... and what the first line of data looks like.
Since you haven't changed the field separator, I'm assuming that you are processing a line of items separated by spaces. The following will do exactly what you are looking for:
arr_values=()
read -ra arr_values <<< "$(head -n 1 file.txt)"
for s in "${arr_values[@]}"; do
    echo "$s"
done
All the work is done in the read command. Unless you need to process the values in an array more than once, you don't need an array at all; you could simply do:
set -f    # disable globbing, in case the line contains wildcard characters
for s in $(head -n 1 file.txt); do
    echo "$s"
done
set +f
This puts each non-blank, whitespace-separated field into $s, making one pass through the loop per field.
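If the first line happens to be delimited by something other than whitespace (a comma here, purely as an assumption for illustration, since the question doesn't say), you can scope a different IFS to the read call alone. A minimal sketch, with a made-up sample file:

```shell
#!/usr/bin/env bash
# Sketch: split the first line on commas instead of whitespace.
# The IFS=',' prefix affects only this one read command.
printf 'one,two,three\nsecond line is ignored\n' > data.csv    # hypothetical sample

IFS=',' read -ra arr_values < data.csv    # read consumes only the first line

for s in "${arr_values[@]}"; do
    echo "$s"
done
```

Because read stops at the first newline, the rest of the file is never touched.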
Solution 3
If you want to use awk, you can do it like this:
arr_values=( $(awk '{print;exit}' file) )
The shell will do the splitting on whitespace. Note that if the line contains wildcards, they will be expanded to the list of matching files (if there are any). Use set -f to disable wildcard expansion.
set -f
arr_values=( $(awk '{print;exit}' file) )
set +f
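A sketch of a variant that avoids set -f altogether: read -ra performs word splitting on IFS but never pathname expansion, so wildcards in the line survive unexpanded. The sample file written here is made up for illustration:

```shell
#!/usr/bin/env bash
# Sketch: read -ra splits on IFS but does no globbing, so wildcard
# characters in the line are kept literally without set -f.
printf '1 * 3\nsecond line\n' > file    # hypothetical sample input

read -ra arr_values < <(awk '{print; exit}' file)

echo "${#arr_values[@]} fields; second is: ${arr_values[1]}"
# prints: 3 fields; second is: *
```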
Redson
Updated on September 18, 2022

Comments
-
Redson over 1 year
I am trying to store the values of the first line of a text file into an array. Here is what I have so far:
arr_values=()
awk '
NR==1 {
    for (i=0; i<=NF; i++)
        'arr_values[i]'=$i    #error here
}' file.txt
for ((i=0; i<${#arr_values[@]}; i++))
do
    echo "${arr_values[i]}"
done
I am getting an error with initializing the array, mainly because I don't know how to use awk to initialize an external array. Any suggestions (only with awk)? Thanks. Apologies for the cross post; I didn't get the answer I wanted.
-
0xC0000022L almost 10 years
GNU awk? mawk? You should specify it. There are literally at least a dozen flavors that differ in subtle ways.
-
Redson almost 10 years
It's not what I'm looking for, but thanks for taking the time to answer my question.
-
mikeserv almost 10 years
You can set the bash arrays in the same way, and you can change how they split with IFS=${split}.
-
0xC0000022L almost 10 years
Why not BEGIN? Did you consider the difference between FNR and NR also ... especially when the AWK script is told to process several input files?!
-
cuonglm almost 10 years
@0xC0000022L: Why BEGIN? In BEGIN, no record has been read. And the OP seems to process only one file.
-
0xC0000022L almost 10 years
Then FNR is still the better option, unless this is a prototype. But as we all know, prototypes tend to end up in production down the line.
cuonglm almost 10 years
@0xC0000022L: Agree. In this case, I use NR because it is still correct and shorter than FNR :)