Category Archives: Database

Vim – File Formats, Line Feed, and Carriage Return

When given a comma- or tab-delimited file, we usually want to import it into some kind of database. The first thing you need to find out is what type of file it is, because the line endings can make or break your import.

Mac OS Pre-X     CR     ASCII 13         Control key: ^M
Mac OS X / Unix  LF     ASCII 10         Control key: ^J
Windows          CRLF   ASCII 13 + 10    Shows in vim as ^M$

This could easily be done at the command line with:
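A quick sketch with the `file` utility; the file name and contents here are stand-ins for whatever you were handed:

```shell
# Create a sample tab-delimited file with Windows (CRLF) line endings.
printf 'col1\tcol2\r\ncol3\tcol4\r\n' > /tmp/data.txt

# Ask file(1) what it sees.
file /tmp/data.txt
# → /tmp/data.txt: ASCII text, with CRLF line terminators
```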

Doesn’t get much clearer than that. So this particular file is from a Windows system with CRLF terminators. Without doing this check, it’s a bit vague what we’re importing. For example, suppose we assumed the text file was from an OS X/Unix system and ran this:
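A sketch of such an import; the table name and file path are made up, and note the Unix-style '\n' terminator:

```
LOAD DATA INFILE '/tmp/data.txt'
INTO TABLE imports
FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\n';
```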

You’d get a stray carriage return character at the end of each row (it counts toward the last column’s characters and can truncate your data). Now, using the vim editor, you won’t see much wrong with the file even with the “:set list” command. You’ll see ^I where the tabs are and a $ for end-of-line. All looks well, but that’s because vim auto-detected the file format. You can see this with the vim command:
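For example, from within vim:

```
:set fileformat?
```

For this file, vim answers “fileformat=dos”.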

Now, to see its true colors, let’s force vim to read it as a unix file.
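Assuming the file is named data.txt, reopen it with the format forced:

```
vim data.txt
:e ++ff=unix
```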

You’ll start to see the infamous ^M in the file.

In conclusion, find out exactly what kind of file you’re importing, and tell the importer. In the case of MySQL, terminating lines by “\r\n” will produce a proper import. I hope this solves the mystery of imports gone wrong!
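With MySQL’s LOAD DATA, that looks like this (table name and path are placeholders):

```
LOAD DATA INFILE '/tmp/data.txt'
INTO TABLE imports
FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\r\n';
```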


What is the Proper Data Type for Storing UUIDs in MySQL?

While creating a schema for SNOMED to ICD-10 mapping, I came across the requirement of storing a UUID (a 128-bit value) as the ‘id’ field. 128 bits = 16 bytes, so we’ll use BINARY(16).
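A sketch of the column definition; the table name here is made up:

```
CREATE TABLE snomed_icd10_map (
  id BINARY(16) NOT NULL PRIMARY KEY
  -- ...mapping columns...
);
```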

In order to store the UUID properly, we need to remove the dashes and convert the hexadecimal representation to its binary form. This can be done in a single step with UNHEX(REPLACE(id, '-', '')). In my case, I’m loading it from a file, so it’s done this way:
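A sketch of that load; the file path is a placeholder, and @id captures the raw UUID text so the SET clause can transform it:

```
LOAD DATA INFILE '/tmp/map.txt'
INTO TABLE snomed_icd10_map
FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\r\n'
(@id)
SET id = UNHEX(REPLACE(@id, '-', ''));
```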

Of course, when retrieving the ‘id’ field, you’ll need to wrap it in HEX(id).
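For example, with an optional query to restore the familiar dashed form:

```
SELECT HEX(id) FROM snomed_icd10_map;

-- Or, putting the dashes back (8-4-4-4-12):
SELECT LOWER(CONCAT_WS('-',
  SUBSTR(HEX(id), 1, 8),
  SUBSTR(HEX(id), 9, 4),
  SUBSTR(HEX(id), 13, 4),
  SUBSTR(HEX(id), 17, 4),
  SUBSTR(HEX(id), 21)
)) AS uuid
FROM snomed_icd10_map;
```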

Troubleshooting MySQL SSL Replication Problems with Ubuntu 14.04 LTS

Let me guess: you’re trying to set up MySQL replication and you end up with errors like the following:

  • ERROR 2026 (HY000): SSL connection error: protocol version mismatch
    • The mismatch is usually because you’re trying to authenticate with your client certificates. Using the --ssl-ca flag by itself is sufficient.
  • ERROR 2026 (HY000): SSL connection error: ASN: bad other signature confirmation
    • You MUST use a chain cert.
  • ERROR 2003 (HY000): Can’t connect to MySQL server on ‘’ (111)

This example was done with Percona Server 5.6 on Ubuntu 14.04 LTS with Comodo Certificates.

Some MySQL builds don’t support the PKCS#8 key format.

This occurs when keys are generated with OpenSSL 1.0+. To fix this issue, simply convert the key to the PKCS#1 format:
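A sketch of the conversion; the file names are placeholders (point them at your real server key), and a sample key is generated here just so the commands are self-contained:

```shell
IN=/tmp/server-key-pkcs8.pem
OUT=/tmp/server-key-pkcs1.pem

# Generate a sample key; OpenSSL 1.0+ writes PKCS#8 ("BEGIN PRIVATE KEY").
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out "$IN" 2>/dev/null

# Re-encode as a plain RSA key. On OpenSSL 1.0.x this emits the traditional
# PKCS#1 format ("BEGIN RSA PRIVATE KEY"); OpenSSL 3+ needs -traditional.
openssl rsa -in "$IN" -out "$OUT" 2>/dev/null

head -1 "$OUT"
```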

You should now see the key’s header change from “-----BEGIN PRIVATE KEY-----” to “-----BEGIN RSA PRIVATE KEY-----”.

Keep in mind you can’t simply insert “RSA” into a PKCS#8 header. It won’t work! They’re different formats altogether. You can verify the certs/keys:
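A sketch of the verification; the paths are placeholders, and a throwaway self-signed cert is created here so the commands can run on their own (with a real deployment you’d verify the server cert against your CA chain):

```shell
CA=/tmp/demo-ca.pem

# Create a throwaway self-signed cert/key pair for demonstration.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo-ca" \
    -keyout /tmp/demo-ca.key -out "$CA" -days 1 2>/dev/null

# Verify a certificate against the CA chain (here the CA verifies itself).
openssl verify -CAfile "$CA" "$CA"

# Confirm the cert and key belong together: the moduli must match.
openssl x509 -noout -modulus -in "$CA" | openssl md5
openssl rsa  -noout -modulus -in /tmp/demo-ca.key | openssl md5
```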

Additional troubleshooting tips:

  • Make sure both servers have SSL enabled, and make sure master_ssl_ca contains the entire CA chain or it won’t work!

  • If you run into this error: “Slave failed to initialize relay log info structure from the repository” you just need to run “RESET SLAVE;”
  • Make sure your firewalls have Port 3306 (or whatever port you’re using) open.
  • Make sure secure_auth is on:
  • Make sure you’re granting the correct permissions:
  • You should have master_ssl set to 1:
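The last three checks above can be sketched in SQL; the user, password, host, and CA path are all placeholders:

```
-- Check that secure_auth is enabled.
SHOW VARIABLES LIKE 'secure_auth';
SET GLOBAL secure_auth = ON;

-- Grant replication to the replication user, requiring SSL.
GRANT REPLICATION SLAVE ON *.* TO 'repl_user'@'%'
  IDENTIFIED BY 'password' REQUIRE SSL;

-- Point the slave at the master with master_ssl set to 1.
CHANGE MASTER TO
  MASTER_HOST = 'master.example.com',
  MASTER_USER = 'repl_user',
  MASTER_PASSWORD = 'password',
  MASTER_SSL = 1,
  MASTER_SSL_CA = '/etc/mysql/ssl/ca-chain.pem';
```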

Index Name Too Long in Rails 3.2 and MySQL 5.5

If you get an error during a Rails migration saying the auto-generated index name is too long (the limit is 64 characters), pass a :name option as the last argument to add_index to override the generated name with a shorter custom one.

Sample Error Message:

== AddColumnNameToPrices: reverting ============================================
-- remove_column(:prices, :column_name)
rake aborted!
An error has occurred, this and all later migrations canceled:

Index name 'temp_index_altered_prices_on_column_and_other_column_and_third_column' on table 'altered_prices' is too long; the limit is 64 characters

add_index :designations, [ :scope_type, :scope_id, :role_id, :user_id ], :unique => true, :name => 'my_index'

Can’t connect to local MySQL server through socket ‘/var/run/mysqld/mysqld.sock’ (2)

I’ve gotten this error on both Ubuntu and Mac OS X.

Can’t connect to local MySQL server through socket ‘/var/run/mysqld/mysqld.sock’ (2)

A simple quick-fix solution is to create a symbolic link to it. The other location I’ve seen it, besides /var/run/mysqld, is /tmp/mysql.sock:

sudo ln -s /tmp/mysql.sock /var/run/mysqld/mysqld.sock

MySQL: How to Restore Specific Tables From a Large SQL Dump File

So I ran into an emergency case where I had to restore a single table from a 40GB backup created with mysqldump (a logical backup). Restoring the whole thing would take more than two hours! Let’s use “table1” as the table I need to restore into “database1” from a mylargedump.sql dump file.

grep 'INSERT INTO `table1`' mylargedump.sql | mysql database1

That’s it! The insert statements for this specific table will pipe directly into MySQL.
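You can sanity-check the grep extraction offline with a toy dump; the file name and table names below are made up:

```shell
# Build a tiny stand-in dump containing INSERTs for two tables.
cat > /tmp/mylargedump.sql <<'EOF'
INSERT INTO `table1` VALUES (1,'a');
INSERT INTO `table2` VALUES (2,'b');
INSERT INTO `table1` VALUES (3,'c');
EOF

# Only table1's rows come out; in real use, pipe this into `mysql database1`.
grep 'INSERT INTO `table1`' /tmp/mylargedump.sql
```

Note that this pipes only INSERT statements, so the empty table structure must already exist in the target database.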

Excluding DROP and CREATE TABLE From mysqldump

How do you dump data from a MySQL database table without the DROP and CREATE TABLE statements showing up before the INSERTs?

The following solution will give you just a list of INSERT statements. Simply use the options --skip-triggers --compact --no-create-info.
Here is an example.
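For instance, writing a data-only dump to a file; the database and table names are placeholders, and this needs a running MySQL server to connect to:

```
mysqldump --skip-triggers --compact --no-create-info database1 table1 > table1-data.sql
```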