Batch convert encoding in files


Solution 1

Cygwin and GnuWin32 provide Unix tools like iconv and dos2unix (and unix2dos). Under Unix/Linux/Cygwin, you'll want to use "windows-1252" as the encoding instead of ANSI (see below), unless you know your system uses a codepage other than 1252 as its default, in which case you'll need to tell iconv the right codepage to translate from.

Convert from one (-f) to the other (-t) with:

$ iconv -f windows-1252 -t utf-8 infile > outfile

Or in a find-all-and-conquer form:

## this rewrites the original files in place!
## (find's -exec does not go through a shell, so a bare \> redirection is
## passed to iconv as a literal argument; write to a temporary file instead)
$ find . -name '*.txt' -exec sh -c 'iconv --verbose -f windows-1252 -t utf-8 "$1" > "$1.tmp" && mv "$1.tmp" "$1"' -- {} \;

Alternatively:

## this will clobber the original files!
$ find . -name '*.txt' -exec iconv --verbose -f windows-1252 -t utf-8 -o {} {} \;
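If you would rather keep the originals around, a minimal sketch (assuming GNU iconv and a POSIX shell; the demo/ directory and filenames are purely illustrative) converts each file via a temporary copy and leaves a .bak of the original behind:

```shell
# Create a small windows-1252 demo file (0xE9, octal 351, is e-acute in
# that codepage), then convert every .txt under demo/ to UTF-8, keeping
# the unconverted original alongside as FILE.bak.
mkdir -p demo && printf 'caf\351\n' > demo/sample.txt

find demo -name '*.txt' | while IFS= read -r f; do
    iconv -f windows-1252 -t utf-8 "$f" > "$f.tmp" &&
        mv "$f" "$f.bak" &&     # keep the original
        mv "$f.tmp" "$f"        # replace it with the converted copy
done
```

Because iconv never writes to the file it is reading, this avoids the truncation problem entirely (filenames containing newlines excepted, a limitation of the `while read` loop).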

This question has been asked many times on this site, so here's some additional information about "ANSI". In an answer to a related question, CesarB mentions:

There are several encodings which are called "ANSI" in Windows. In fact, ANSI is a misnomer. iconv has no way of guessing which you want.

The ANSI encoding is the encoding used by the "A" functions in the Windows API (the "W" functions use UTF-16). Which encoding it corresponds to usually depends on your Windows system language. The most common is CP 1252 (also known as Windows-1252). So, when your editor says ANSI, it means "whatever the API functions use as the default ANSI encoding", which is the default non-Unicode encoding used in your system (and thus usually the one which is used for text files).

The page he links to gives this historical tidbit (quoted from a Microsoft PDF) on the origins of CP 1252 and ISO-8859-1, another oft-used encoding:

[...] this comes from the fact that the Windows code page 1252 was originally based on an ANSI draft, which became ISO Standard 8859-1. However, in adding code points to the range reserved for control codes in the ISO standard, the Windows code page 1252 and subsequent Windows code pages originally based on the ISO 8859-x series deviated from ISO. To this day, it is not uncommon to have the development community, both within and outside of Microsoft, confuse the 8859-1 code page with Windows 1252, as well as see "ANSI" or "A" used to signify Windows code page support.
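The deviation described above is easy to demonstrate with iconv itself: the byte 0x93 sits in the range the ISO standard reserves for control codes, so it decodes to a curly quote under windows-1252 but to an invisible C1 control character under ISO-8859-1 (a quick check, assuming GNU iconv and od from coreutils):

```shell
# 0x93 (octal 223) is U+201C LEFT DOUBLE QUOTATION MARK in windows-1252,
# but the C1 control code U+0093 in ISO-8859-1:
printf '\223' | iconv -f windows-1252 -t utf-8 | od -An -tx1   # e2 80 9c
printf '\223' | iconv -f iso-8859-1  -t utf-8 | od -An -tx1    # c2 93
```

This is why telling iconv the wrong "ANSI" source encoding silently produces different (and sometimes unprintable) output rather than an error.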

Solution 2

With PowerShell you can do something like this:

Get-Content IN.txt | Out-File -encoding ENC -filepath OUT.txt

where ENC is an encoding name such as unicode, ascii, utf8, or utf32. Check out 'help Out-File' for the full list.

To convert all the *.txt files in a directory to UTF-8, do something like this:

foreach ($i in ls -name DIR/*.txt) {
    Get-Content DIR/$i |
        Out-File -encoding utf8 -filepath DIR2/$i
}

which creates a converted version of each .txt file in DIR2.

To convert the files in place in all subdirectories (here matching *.java; adjust the filter as needed), use:

foreach($i in ls -recurse -filter "*.java") {
    $temp = Get-Content $i.fullname
    Out-File -filepath $i.fullname -inputobject $temp -encoding utf8 -force
}

Solution 3

One-liner using find, with automatic detection

The character encoding of each matching text file is detected automatically, and each file is converted to UTF-8:

$ find . -type f -iname '*.txt' -exec sh -c 'iconv -f "$(file -bi "$1" | sed -e "s/.*[ ]charset=//")" -t utf-8 -o converted "$1" && mv converted "$1"' -- {} \;

To perform these steps, a subshell (sh) is used with -exec, running a one-liner via the -c flag and passing the filename as the positional argument "$1" with -- {}. In between, the UTF-8 output is written to a temporary file named converted, which then replaces the original.
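The sed step can often be dropped: file also has a --mime-encoding option that prints just the charset name (a sketch, assuming a libmagic-based file(1); sample.txt is an illustrative name):

```shell
# create a latin-1/windows-1252 style sample (0xE9, octal 351, is e-acute)
printf 'caf\351\n' > sample.txt

file -bi sample.txt                  # full MIME string, e.g. "text/plain; charset=iso-8859-1"
file -b --mime-encoding sample.txt   # just the charset name, ready to feed to iconv -f
```

With that, the inner command shrinks to `iconv -f "$(file -b --mime-encoding "$1")" -t utf-8 ...`.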

The find command is very useful for such file management automation.


Solution 4

UTFCast is a Unicode converter for Windows which supports batch mode. I'm using the paid version and am quite comfortable with it.

UTFCast is a Unicode converter that lets you batch convert all text files to UTF encodings with just a click of your mouse. You can use it to convert a directory full of text files to UTF encodings including UTF-8, UTF-16 and UTF-32 to an output directory, while maintaining the directory structure of the original files. It doesn't even matter if your text file has a different extension; UTFCast can automatically detect text files and convert them.

Solution 5

In my use case, I needed automatic input encoding detection, and there were a lot of files with Windows-1250 encoding, for which the command file -bi <FILE> returns charset=unknown-8bit. That is not a valid parameter for iconv.

I have had the best results with enca.

Convert all files with a .txt extension to UTF-8:

find . -type f -iname '*.txt' -exec sh -c 'echo "$1" && enca "$1" -x utf-8' -- {} \;
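Whichever tool does the conversion, it is worth verifying the result afterwards. A quick check (a sketch, assuming file(1) with --mime-encoding; the checkdir/ files are illustrative) tallies files per detected encoding:

```shell
# two sample files: pure ASCII, and UTF-8 (\303\251 is e-acute in UTF-8)
mkdir -p checkdir
printf 'plain ascii\n'  > checkdir/a.txt
printf 'caf\303\251\n'  > checkdir/b.txt

# count files per detected encoding; after a successful conversion everything
# should report utf-8 (or us-ascii, which is a subset of UTF-8)
find checkdir -type f -iname '*.txt' -exec file -b --mime-encoding {} \; | sort | uniq -c
```

Any file still reported as iso-8859-x, unknown-8bit, or similar was missed and needs another pass.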
Author: desolat

Updated on September 17, 2022

Comments

  • desolat
    desolat over 1 year

    How can I batch-convert files in a directory for their encoding (e.g. ANSI → UTF-8) with a command or tool?

    For single files, an editor helps, but how can I do the mass files job?

  • Sony Santos
    Sony Santos about 10 years
    dos2unix is useful to convert line breaks, but the OP is looking for converting character encodings.
  • sylbru
    sylbru over 9 years
    Don't use the same filename as input and output! iconv seems to truncate files to 32,768 bytes if they exceed this size. Since it writes to the file it's trying to read from, it manages to do the job if the file is small enough; otherwise it truncates the file without any warning...
  • Orsinus
    Orsinus about 9 years
    Converting from ANSI to UTF via your first proposal does erase the whole content of my textfile...
  • akira
    akira about 9 years
    @Acroneos: then you made a mistake: the in-file is IN.txt, the outfile is OUT.txt ... this way it is impossible to overwrite the original. if you used the same filename for IN.txt and OUT.txt then you overwrite the file you are reading from, obviously.
  • Scott McIntyre
    Scott McIntyre about 8 years
    FYI: this question is tagged with osx, and it doesn't look like either of the convert-all commands works on Yosemite or El Capitan. The iconv version Apple ships doesn't support --verbose or -o, and the other syntax redirecting stdout doesn't work for some reason and just sends it to regular stdout.
  • Uwe Keim
    Uwe Keim almost 8 years
    Seems they cannot convert into the same folder, only into another destination folder.
  • pparas
    pparas almost 7 years
    Powershell will convert to UTF with BOM. find and iconv might be much easier.
  • SherylHohman
    SherylHohman over 5 years
    The pro version allows in-place conversion. $20/3months. rotatingscrew.com/utfcast-version-comparison.aspx
  • SherylHohman
    SherylHohman over 5 years
    Oh, the express (free) version is useless - it only "detects" utf-8 WITH BOM !! (everyone can do that). Only the Pro version, which auto-renews every 3 months at $20 a pop, will auto-detect. The price is steep for a non-enterprise user. AND beware: if you try the basic version and your file is already utf-8 (without BOM), this converter will detect it as ASCII, then (re-)"convert" it to utf-8, which could result in gibberish. Be aware of this before trying the express version! They have a demo version for the pro that produces no output - pointless IMHO cuz you can't verify results before buying!
  • mwfearnley
    mwfearnley over 4 years
    I'm guessing original_charset is just a placeholder here, not actually the magical "detect my encoding" feature we all might hope for.
  • Gwyneth Llewelyn
    Gwyneth Llewelyn about 4 years
    Dang... I wish your answer wasn't that deeply buried at the bottom! enca is really useful, and way easier to use... when it works. Then again, other solutions fail, too...
  • Gwyneth Llewelyn
    Gwyneth Llewelyn about 4 years
    This has the advantage of not requiring the -o option, which is not available on some flavours of iconv (namely, macOS, and I suspect FreeBSD as well). On the other hand, the for loop is non-trivial to create if you require it to traverse a deep tree structure of directories...
  • phuclv
    phuclv almost 4 years
    @pparas that's wrong. Commands related to text files like Out-File, Get-Content, Set-Content... all have an -Encoding parameter which allows utf8BOM or utf8NoBOM. iconv is much worse in this regard because it never supports UTF-8 with BOM
  • Admin
    Admin almost 4 years
    this is new line conversion and has nothing to do with encoding conversion
  • phuclv
    phuclv almost 4 years
    \ is not an escape character in powershell so putting it at the end of each line won't work
  • Peter Mortensen
    Peter Mortensen almost 4 years
    What about Python 3?
  • djjeck
    djjeck over 3 years
    This works on Mac: find . -type f -iname "*.txt" -exec sh -c 'iconv -f windows-1252 -t utf-8 "$1" > converted && mv converted "$1"' -- "{}" \;, to convert from ANSI
  • John
    John over 3 years
    This looked like exactly what I was hoping for (a GUI) though no drag-and-drop (you have to use the File menu) and no ANSI option (that I could find). Nothing I did showed up later as UTF-8 in Notepad++. If this had had just a little more development this tool would have been nearly perfect.
  • Sybuser
    Sybuser over 2 years
    My iconv command from git bash has no -o option so I use file redirection > : find . -type f -name '*.txt' -exec sh -c 'iconv -f $(file -bi "$1" |sed -e "s/.*[ ]charset=//") -t utf-8 > /tmp/converted "$1" && mv /tmp/converted "$1"' -- {} \;. Any advantage of using this syntax -- as opposed to passing {} directly ? find . -type f -name '*.txt' -exec sh -c 'iconv -f $(file -bi {} |sed -e "s/.*[ ]charset=//") -t utf-8 > /tmp/converted {} && mv /tmp/converted {}' \;