Python pyodbc connections to IBM Netezza Erroring


Solution 1

Turns out pyodbc can't gracefully convert all of Netezza's types. The table I was working with had two that are problematic:

  • A column of type NUMERIC(7,2)
  • A column of type NVARCHAR(255)

The NUMERIC column causes a decimal conversion error on NULL. The NVARCHAR column returns a utf-16-le encoded string, which is a pain in the ass.

I haven't found a good driver-or-wrapper-level solution yet. This can be hacked around by casting types in the SQL statement:

SELECT
     foo::FLOAT AS was_numeric
     , bar::VARCHAR(255) AS was_nvarchar

I'll post here if I find a lower-level answer.
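
In the meantime, here's a minimal end-to-end sketch of the cast workaround from pyodbc. The DSN, database, and column names are the ones from my question and the snippet above; substitute your own.

import pyodbc

conn = pyodbc.connect(dsn='NZSQL')
curs = conn.cursor()

# Cast the problem columns in SQL so pyodbc never sees NUMERIC or NVARCHAR.
curs.execute("""
    SELECT
        foo::FLOAT AS was_numeric
        , bar::VARCHAR(255) AS was_nvarchar
    FROM DB..FOO
    LIMIT 10
""")

for row in curs.fetchall():
    print row.was_numeric, row.was_nvarchar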

Solution 2

I've just encountered the same problem and found a different solution. I managed to solve the issue by:

  1. Making sure the following attributes are part of my driver options in the odbc.ini file:

    • UnicodeTranslationOption = utf16
    • CharacterTranslationOption = all
  2. Adding the following environment variables:

    • LD_LIBRARY_PATH=$LD_LIBRARY_PATH:[NETEZZA_LIB_FILES_PATH]
    • ODBCINI=[ODBC_INI_FULL_PATH]
    • NZ_ODBC_INI_PATH=[ODBC_INI_FOLDER_PATH]

    In my case the values are:

    • LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/nz/lib
    • ODBCINI=/etc/odbc.ini
    • NZ_ODBC_INI_PATH=/etc

I'm using CentOS 6 and have also installed both the 'unixODBC' and 'unixODBC-devel' packages.
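
For what it's worth, a minimal sketch of wiring the last two of those up from Python itself (the paths are the ones from my box, and NZSQL is just the DSN name from the question):

import os

# ODBCINI and NZ_ODBC_INI_PATH are read when the connection is opened, so
# setting them here works. LD_LIBRARY_PATH, by contrast, is only read by the
# dynamic loader at process start-up, so export that one in the shell before
# launching Python.
os.environ['ODBCINI'] = '/etc/odbc.ini'      # full path to odbc.ini
os.environ['NZ_ODBC_INI_PATH'] = '/etc'      # folder containing odbc.ini

import pyodbc
conn = pyodbc.connect(dsn='NZSQL')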

Hope it helps someone.

Solution 3

I'm not sure what your error is, but the code below is allowing me to connect to Netezza via ODBC:

# Connect via pyodbc to listed data sources on machine
import pyodbc
print pyodbc.dataSources()

print "Connecting via ODBC"

# get a connection, if a connect cannot be made an exception will be raised here
conn = pyodbc.connect("DRIVER={NetezzaSQL};SERVER=<myserver>;PORT=<myport>;DATABASE=<mydbschema>;UID=<user>;PWD=<password>;")

print "Connected!\n"

# you can then use conn cursor to perform queries
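
For example, a quick sanity check once the connection is up -- I believe _V_DATABASE is a standard Netezza system view, but any table you can read works just as well:

cursor = conn.cursor()
cursor.execute("SELECT * FROM _V_DATABASE")
for row in cursor.fetchall():
    print row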

Solution 4

The Netezza Linux client package includes /usr/local/nz/lib/ODBC_README, which lists all the values for those two attributes:

UnicodeTranslationOption:
    Specify translation option for Unicode.
    Possible value strings are:
        utf8    : unicode data is in utf-8 encoding
        utf16   : unicode data is in utf-16 encoding
        utf32   : unicode data is in utf-32 encoding
    Do not add '-' in the value strings listed above, e.g. "utf-8" is not 
    a valid string value. These value strings are case insensitive.

    On windows this option is not available as windows DM always passes 
    unicode data in utf16.

CharacterTranslationOption ("Optimize for ASCII character set" on Windows):
    Specify translation option for character encodings.
    Possible value strings are:
        all     : Support all character encodings
        latin9  : Support Latin9 character encoding only
    Do not add '-' in the value strings listed above, e.g. "latin-9"
    is not a valid string value. These value strings are case
    insensitive.

    NPS uses the Latin9 character encoding for char and varchar
    datatypes. The character encoding on many Windows systems
    is similar, but not identical to this. For the ASCII subset
    (letters a-z, A-Z, numbers 0-9 and punctuation marks) they are
    identical. If your character data in CHAR or VARCHAR datatypes is
    only in this ASCII subset then it will be faster if this box is
    checked. If your data has special characters such as the Euro
    sign (€) then keep the box unchecked to accurately convert
    the encoding. Characters in the NCHAR or NVARCHAR data types
    will always be converted appropriately.
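
As an aside: on a pyodbc new enough to have setdecoding (4.x; the 3.0.7 from the question predates it), you can also declare on the Python side how the driver's wide-character data should be decoded, which is one way to cope with the utf-16-le strings mentioned in Solution 1. A sketch, assuming UnicodeTranslationOption = utf16 in odbc.ini:

import pyodbc

conn = pyodbc.connect(dsn='NZSQL')

# Tell pyodbc how the driver encodes wide-character (NCHAR/NVARCHAR) data;
# this mirrors UnicodeTranslationOption = utf16 above.
conn.setdecoding(pyodbc.SQL_WCHAR, encoding='utf-16-le')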

Comments

  • Gastove almost 2 years

    So. This issue is almost exactly the same as the one discussed here -- but the fix (such as it is) discussed in that post doesn't fix things for me.

    I'm trying to use Python 2.7.5 and pyodbc 3.0.7 to connect from an Ubuntu 12.04 64bit machine to an IBM Netezza Database. I'm using unixODBC to handle specifying a DSN. This DSN works beautifully from the isql CLI -- so I know it's configured correctly, and unixODBC is ticking right along.

    The code is currently dead simple, and easy to reproduce in a REPL:

    In [1]: import pyodbc
    In [2]: conn = pyodbc.connect(dsn='NZSQL')
    In [3]: curs = conn.cursor()
    In [4]: curs.execute("SELECT * FROM DB..FOO ORDER BY created_on DESC LIMIT 10")
    Out[4]: <pyodbc.Cursor at 0x1a70ab0>
    
    In [5]: curs.fetchall()
    ---------------------------------------------------------------------------
    InvalidOperation                          Traceback (most recent call last)
    <ipython-input-5-ad813e4432e9> in <module>()
    ----> 1 curs.fetchall()
    
    /usr/lib/python2.7/decimal.pyc in __new__(cls, value, context)
        546                     context = getcontext()
        547                 return context._raise_error(ConversionSyntax,
    --> 548                                 "Invalid literal for Decimal: %r" % value)
        549
        550             if m.group('sign') == "-":
    
    /usr/lib/python2.7/decimal.pyc in _raise_error(self, condition, explanation, *args)
       3864         # Errors should only be risked on copies of the context
       3865         # self._ignored_flags = []
    -> 3866         raise error(explanation)
       3867
       3868     def _ignore_all_flags(self):
    
    InvalidOperation: Invalid literal for Decimal: u''
    

    So I get a connection, the query returns correctly, and then when I try to get a row... asplode.

    Anybody ever managed to do this?

  • mlevit about 10 years
    Have you found a different solution to this issue? I've just encountered this problem.
  • Gastove about 10 years
    No, I'm afraid I haven't yet. It seems that this is a problem that would need to be corrected in pyodbc itself, and I haven't had that much time on my hands :-/
  • Leonid over 8 years
    What do both of these do: UnicodeTranslationOption = utf16 and CharacterTranslationOption = all?
  • Koby over 8 years
  • Leonid over 8 years
    Thank you, that was useful, but two more things: this information could be lost due to link rot. Also, one may still have a question: what are the valid options for the second value other than all?
  • Koby over 8 years
    Unfortunately I don't know. In my case those are the values I had to set for it to work; other values didn't, and I never found another value besides all.
  • Paul over 7 years
    I had the same error on SQL Server (bigint), and my cast of the bad column (foo) looked more like this: select cast(foo as FLOAT) foo from table_name
  • Petro over 6 years
    @Koby that link doesn't seem to work anymore. Found this ibm.com/support/knowledgecenter/SSZJPZ_8.7.0/…