Windows Batch - Pretty Print JSON per Script
Solution 1
You will definitely need an external tool for that. I'm using the command-line JSON processor jq - just pass it the identity filter . and it will pretty-print the JSON it's been given as input.
Update: Corrected the syntax (where did that come from?) - sorry @Delfic and @Sonata. Even though the empty filter "" from my original answer works, I changed it to . as @Wolf suggested, since the manual indeed calls it the "absolute simplest filter".
Example:
ECHO { "att1": 1, "att2": 2 } | jq .
Output:
{
"att1": 1,
"att2": 2
}
If you want to process JSON given directly on the command line, you can also use null input (-n):
jq -n "{ "att1" : 1, "att2" : 2 }"
To process a file, do:
jq . dirty.json
...or, if you're so inclined:
TYPE dirty.json | jq .
Solution 2
A super quick way, if you have Python installed, is to type the following in Command Prompt or PowerShell:
type dirty.json | python -m json.tool > pretty.json
where dirty.json is the minified or unreadable JSON and pretty.json is the pretty one. You can even put it in a batch file and run it by passing an argument. It's very fast.
Explanation:
type writes the file content to the console, but since it is piped, the content is sent to Python's json.tool module instead, and then > writes the output to a file called pretty.json. Try it without > pretty.json to see the output in the console.
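The batch-file idea above can be sketched in a few lines of Python. This is my own minimal sketch of roughly what json.tool does (parse, then re-serialize with indentation) - not the actual json.tool source, which has more options:

```python
# prettify.py - a minimal sketch of what "python -m json.tool" does:
# parse JSON text and re-serialize it with indentation.
import json
import sys

def prettify(text, indent=4):
    """Parse a JSON string and return it re-serialized with indentation."""
    return json.dumps(json.loads(text), indent=indent)

if __name__ == "__main__" and len(sys.argv) > 1:
    # Usage: python prettify.py dirty.json > pretty.json
    with open(sys.argv[1], encoding="utf-8") as f:
        print(prettify(f.read()))
```

A script like this is what you would call from a batch file, passing the file name as the argument.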
Solution 3
Windows' cmd capabilities are very limited, and JSON support certainly isn't one of them, so I would advise against creating a batch script! I would highly recommend the command-line tool xidel though. It prettifies by default.
Stdin:
ECHO {"a":1,"b":2,"c":3} | xidel -s - -e "$json"
{
"a": 1,
"b": 2,
"c": 3
}
(If the input is JSON, xidel will parse it and assign it to the internal global variable $json.)
File/url:
xidel -s "input.json" -e "$json"
#or
xidel -s "https://[...]" -e "$json"
#or
xidel -se "json-doc('file-or-url')"
Write to file:
xidel -s "input.json" -e "$json" > output.json
#or
xidel -s "input.json" -e "file:write('output.json',serialize-json($json,{'indent':true()}))"
#or
xidel -se "file:write('output.json',serialize-json(json-doc('input.json'),{'indent':true()}))"
Process multiple files:
FOR %A IN (*.json) DO @xidel -s %A -e "$json" > %~nA_pretty.json
While this would work for lots of JSON files in a directory, it's also extremely slow, because xidel is called for each and every JSON file.
xidel can do this much more efficiently with the integrated EXPath File Module.
xidel -se "file:list(.,false,'json')"
This returns a bare list of all JSON-files in the current directory.
(Equivalent of DIR *.json /B and FOR %A IN (*.json) DO @ECHO %A)
xidel -se "file:list(.,false,'json') ! file:read-text(.)"
#or
xidel -se "for $x in file:list(.,false,'json') return file:read-text($x)"
Both commands return the content of each JSON-file in the current directory.
(Equivalent of FOR %A IN (*.json) DO @TYPE %A)
xidel -se "file:list(.,false,'json') ! json-doc(.)"
#or
xidel -se "for $x in file:list(.,false,'json') return json-doc($x)"
Both commands return the prettified parsed JSON content of each JSON-file in the current directory.
(Equivalent of FOR %A IN (*.json) DO @xidel -s %A -e "$json", but much faster!)
xidel -se "file:list(.,false,'json') ! file:write(substring-before(.,'.json')||'_pretty.json',serialize-json(json-doc(.),{'indent':true}))"
#or
xidel -se "for $x in file:list(.,false,'json') return file:write(substring-before($x,'.json')||'_pretty.json',serialize-json(json-doc($x),{'indent':true}))"
Both commands write the prettified parsed JSON content of each JSON-file in the current directory to a new file with a filename which ends with "_pretty".
(serialize-json() is needed to serialize/stringify the (prettified) JSON before writing it to a file.)
The final command prettified:
xidel -se "
for $x in file:list(.,false,'json') return
file:write(
substring-before($x,'.json')||'_pretty.json',
serialize-json(
json-doc($x),
{'indent':true}
)
)
"
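If xidel isn't an option, the same one-process idea - prettify every .json file in a directory into a *_pretty.json neighbor - can be sketched in Python. The prettify_dir helper below is a hypothetical name of my own, not part of any tool mentioned above:

```python
# One Python process prettifies every .json file in a directory,
# writing each result next to it with a "_pretty.json" suffix -
# the same idea as the xidel one-liner above.
import glob
import json

def prettify_dir(directory="."):
    written = []
    for path in glob.glob(f"{directory}/*.json"):
        if path.endswith("_pretty.json"):
            continue  # skip files produced by an earlier run
        with open(path, encoding="utf-8") as f:
            data = json.load(f)
        out_path = path[:-len(".json")] + "_pretty.json"
        with open(out_path, "w", encoding="utf-8") as f:
            json.dump(data, f, indent=4)
        written.append(out_path)
    return written
```

Because parsing happens inside one interpreter, this avoids paying process-startup cost per file, just like the xidel variant.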
Please download a xidel binary from the recommended "Development" branch to use these queries (and json-doc() in particular). If you insist on using xidel 0.9.8, use json(file:read-text($x)) instead of json-doc($x).
user5417542
Updated on September 23, 2022
Comments
- user5417542 over 1 year: I have multiple JSON files on a server, which can only run batch scripts or command-line tools. None of the JSON files are formatted properly, meaning there are no tab-spaces at the beginning of lines. Is there a way to write a batch script or run a command-line tool to format the files, so they are normal again?
- Wolf almost 7 years: The filter . is recommended in the jq documentation; it also does the prettifying implicitly. This option is much easier to provide on Windows command lines.
- Wolf almost 7 years: This doesn't work reliably, whereas jq does. I failed when processing Pandoc output: right in the first lines, I observed data loss. So this suggestion is counterproductive when you just want to format JSON.
- dina almost 7 years: Do you have any examples of the output you tried formatting? I don't think there will be data loss with the approach I mentioned.
- Wolf almost 7 years: I converted a Markdown document into JSON via Pandoc. When reformatting, the Python json.tool removed the Pandoc metadata.
- Bob Stine over 6 years: I like this answer: I prefer using Python to installing a dedicated pretty printer. And I have a question: is JSON with metadata still JSON?
- Delfic about 6 years: This does not work. Also, with larger file sizes jq just crashes.
- Sonata almost 4 years: @zb226 Didn't work for me either; the output is just a single line of JSON. This does work, however: ECHO {"att1": 1, "att2": 2 } | jq "."
- zb226 almost 4 years: This answer has its merits when jq is not available, but python is.
- Daniel Liuzzi about 3 years: +1 Excellent! If you use Scoop, jq is available in the main bucket. It installs in seconds with scoop install jq.
- user2602807 over 2 years: It's so slow: 1-2 seconds for my JSON with 40 lines.
- zb226 over 2 years: @user2602807 Can't confirm. jq will process the Penn historic dataset, which is just under 1.5 MB, in 0.3 s on my dated machine. Note that outputting to the console may be very slow on Windows, depending on the version; try redirecting into a file by appending > pretty.json to the command.