When given a comma- or tab-delimited file, we usually want to import it into some kind of database. The first thing you need to find out is what type of file it is, because the line endings can make or break your import.
- Mac OS Pre-X: CR (^M)
- Mac OS X / Unix: LF (^J)
- Windows: CRLF (^M^J)
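The three conventions differ only in the terminator bytes; a quick way to see them for yourself (a plain shell sketch, nothing MySQL-specific):

```shell
# Print each convention's terminator bytes; od -c renders
# control characters visibly (\r = CR, \n = LF).
printf 'row1\r\n' | od -c   # Windows: CRLF
printf 'row1\n'   | od -c   # Unix / Mac OS X: LF
printf 'row1\r'   | od -c   # Classic Mac OS: CR
```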
This can easily be checked at the command line with the file utility:
file my-text-file.txt
# my-text-file.txt: ASCII text, with very long lines, with CRLF line terminators
Doesn’t get much clearer than that: this particular file is from a Windows system, with CRLF terminators. Without checking, it’s anyone’s guess what we’re importing. For example, if we assumed the text file came from an OS X/Unix system and ran this:
LOAD DATA LOCAL INFILE "" INTO TABLE cool_table FIELDS TERMINATED BY '\t' OPTIONALLY ENCLOSED BY '"' LINES TERMINATED BY '\n';
You’ll get stray carriage-return characters at the end of each row, attached to the last column’s value (where they count as characters and can truncate your data). Now, using the vim editor, you won’t see much wrong with the file via the “:set list” command: you’ll see ^I where the tabs are and a $ for end-of-line. All looks well, but that’s because vim auto-detected the file format. You can see this with the vim command:
:set ff? ffs?
Now, to see its true colors, let’s force vim to re-read the file with the unix file format:
:e ++ff=unix
You’ll start to see the infamous ^M in the file.
In conclusion: find out exactly what kind of file you’re importing, and tell the importer. In MySQL’s case, LINES TERMINATED BY '\r\n' will give a proper import of a Windows file. I hope this solves the mystery of imports that go wrong!
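Alternatively, if you’d rather normalize the file than adjust the importer, you can strip the carriage returns up front. A quick sketch with tr (the file paths are placeholders):

```shell
# Make a small CRLF (Windows-style) sample file.
printf 'a\tb\r\nc\td\r\n' > /tmp/windows-file.txt
# Delete every CR byte, leaving plain LF line endings.
tr -d '\r' < /tmp/windows-file.txt > /tmp/unix-file.txt
od -c /tmp/unix-file.txt   # no \r remains
```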
While creating a schema for SNOMED to ICD-10 mapping, I came across the requirement of storing a UUID (a 128-bit value) as the ‘id’ field. 128 bits = 16 bytes, so we’ll use BINARY(16).
drop table if exists `snomedct_to_icd10_refset_descriptor_US_20140901`;
create table `snomedct_to_icd10_refset_descriptor_US_20140901` (
id binary(16) not null primary key,
effectiveTime DATE not null,
active tinyint(1) unsigned not null default 0,
moduleId bigint unsigned not null default 0,
refSetId bigint unsigned not null default 0,
referencedComponentId bigint unsigned not null default 0,
attributeDescription bigint not null default 0,
attributeType bigint unsigned not null default 0,
attributeOrder int unsigned null
);
In order to store the UUID properly, we need to remove the dashes and convert the hexadecimal representation to its binary form. This can be done in a single step with UNHEX(REPLACE(@var1, '-', '')). In my case I’m loading it from a file, so it’s done this way:
load data infile '/tmp/der2_cciRefset_RefsetDescriptorSnapshot_US1000124_20140901.txt'
into table `snomedct_to_icd10_refset_descriptor_US_20140901`
fields terminated by '\t'
lines terminated by '\n'
ignore 1 lines
(@var1, effectiveTime, active, moduleId, refSetId, referencedComponentId, attributeDescription, attributeType, attributeOrder)
set id = unhex(replace(@var1, '-', ''));
Of course, when retrieving the id field you’ll need to wrap it in HEX() to get the readable form back.
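The arithmetic is easy to sanity-check outside MySQL; a shell sketch with a made-up UUID:

```shell
# A UUID's text form is 36 characters; dropping the 4 dashes
# leaves 32 hex digits, i.e. exactly the 16 bytes BINARY(16) holds.
uuid='a9c8f2e0-1b2c-4d3e-8f4a-5b6c7d8e9f00'   # hypothetical value
hex=$(printf '%s' "$uuid" | tr -d '-')
echo "${#uuid} -> ${#hex}"   # 36 -> 32
```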
Let me guess: you’re trying to set up MySQL replication and you end up with errors like the following:
- ERROR 2026 (HY000): SSL connection error: protocol version mismatch
- ERROR 2026 (HY000): SSL connection error: ASN: bad other signature confirmation
- The protocol version mismatch is usually because you’re trying to authenticate with your client certificates. Using just the --ssl-ca flag is sufficient:
mysql -utransmed_app -p --ssl-ca=/etc/mysql-ssl/chain-cert.cer -h dest.example.com
You MUST use a chain cert.
- ERROR 2003 (HY000): Can't connect to MySQL server on 'example.com' (111)
Errno 111 is "connection refused": check that the server is actually listening on the port you expect and that bind-address isn't restricted to 127.0.0.1.
This example was done with Percona Server 5.6 on Ubuntu 14.04 LTS with Comodo Certificates.
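Since the chain cert keeps coming up: it is nothing exotic, just the intermediate CA certificate(s) and the root concatenated into a single PEM file, leaf-side first. A sketch with stand-in files (substitute your real Comodo PEMs):

```shell
# Stand-in PEM blocks; real files would hold actual certificates.
printf -- '-----BEGIN CERTIFICATE-----\nINTERMEDIATE\n-----END CERTIFICATE-----\n' > /tmp/intermediate-ca.pem
printf -- '-----BEGIN CERTIFICATE-----\nROOT\n-----END CERTIFICATE-----\n' > /tmp/root-ca.pem
# The "chain" is simply both certs in one file.
cat /tmp/intermediate-ca.pem /tmp/root-ca.pem > /tmp/chain-cert.pem
grep -c 'BEGIN CERTIFICATE' /tmp/chain-cert.pem   # 2
```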
Some MySQL builds don’t support the PKCS#8 key format, whose header looks like this:
-----BEGIN PRIVATE KEY-----
This occurs when keys are generated with OpenSSL 1.0+. To fix the issue, simply convert the key to PKCS#1 format:
openssl rsa -in pkcs8-key.pem -out pkcs1-key.pem
(On OpenSSL 3.x, add the -traditional flag; there, openssl rsa writes PKCS#8 by default.)
You should now see:
-----BEGIN RSA PRIVATE KEY-----
Keep in mind you can’t just insert “RSA” into the PKCS#8 header. It won’t work! They’re different formats altogether. You can verify the certs/keys with:
openssl verify -CAfile ca-cert.pem server-cert.pem client-cert.pem
Additional troubleshooting tips:
- Make sure both servers have SSL enabled in my.cnf, and make sure master_ssl_ca has the entire CA chain or it won’t work!
ssl-ca = /etc/mysql-ssl/chain-cert.pem
ssl-cert = /etc/mysql-ssl/STAR_example_net.pem
ssl-key = /etc/mysql-ssl/wildcard-cert.pem
mysql> show variables like "%ssl%";
+---------------+-------------------------------------+
| Variable_name | Value                               |
+---------------+-------------------------------------+
| have_openssl  | YES                                 |
| have_ssl      | YES                                 |
| ssl_ca        | /etc/mysql-ssl/COMODO-chained.pem   |
| ssl_capath    |                                     |
| ssl_cert      | /etc/mysql-ssl/STAR_example_net.pem |
| ssl_cipher    |                                     |
| ssl_crl       |                                     |
| ssl_crlpath   |                                     |
| ssl_key       | /etc/mysql-ssl/wildcard-cert.pem    |
+---------------+-------------------------------------+
- If you run into this error: “Slave failed to initialize relay log info structure from the repository”, you just need to run RESET SLAVE; and retry.
- Make sure your firewalls have Port 3306 (or whatever port you’re using) open.
- Make sure secure_auth is on:
show variables like "secure_auth";
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| secure_auth   | ON    |
+---------------+-------+
- Make sure you’re granting the correct permissions:
GRANT REPLICATION SLAVE ON *.* TO 'email'@'example.com' IDENTIFIED BY 'SecretPassw0rd' REQUIRE SSL;
- You should have master_ssl set to 1 in your CHANGE MASTER TO statement:
change master to master_ssl = 1;
Getting a “Library not loaded: libmysqlclient.18.dylib” error on your Mac (OS X 10.10)?
Run the following command in the Terminal to fix the issue. It allows your program to find the library dependencies in the standard lib path.
sudo ln -s /usr/local/mysql/lib/libmysqlclient.18.dylib /usr/lib/libmysqlclient.18.dylib
Getting a date exception when the data has null or zero date values in a record? Add the zeroDateTimeBehavior=convertToNull parameter to your JDBC connection URL to fix the error (the host and database below are placeholders):
jdbc:mysql://localhost:3306/mydb?zeroDateTimeBehavior=convertToNull
If you get an error that says the auto-generated index name is too long (limited to 64 characters) during a Rails migration, pass a :name option as the last argument to overwrite the generated name with your own shorter one.
Sample Error Message:
== AddColumnNameToPrices: reverting ============================================
-- remove_column(:prices, :column_name)
An error has occurred, this and all later migrations canceled:
Index name 'temp_index_altered_prices_on_column_and_other_column_and_third_column' on table 'altered_prices' is too long; the limit is 64 characters
add_index :designations, [ :scope_type, :scope_id, :role_id, :user_id ], :unique => true, :name => 'my_index'
I’ve gotten this error on both Ubuntu and Mac OS X:
Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
A simple quick fix is to create a symbolic link to it. The other location I’ve seen it in, besides /var/run/mysqld, is /tmp/mysql.sock:
sudo ln -s /tmp/mysql.sock /var/run/mysqld/mysqld.sock
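The same idea, demonstrated with stand-in paths so nothing on a live system is touched: the link simply makes the path the client expects resolve to the socket the server actually created.

```shell
# Stand-in for the socket the server actually created.
actual=/tmp/demo-mysql.sock
: > "$actual"
# Stand-in for the path the client is hard-wired to use.
expected=/tmp/demo-mysqld.sock
ln -sf "$actual" "$expected"
readlink "$expected"   # resolves to the real socket path
```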
So I ran into an emergency case where I had to restore a single table from a 40GB backup created with mysqldump (a logical backup). Restoring the whole thing would have taken more than two hours! Let’s use “table1” as the table I need to restore into “database1” from the mylargedump.sql dump file.
grep 'INSERT INTO `table1`' mylargedump.sql | mysql database1
That’s it! The insert statements for this specific table will pipe directly into MySQL.
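You can watch the filter work on a tiny synthetic dump; it’s also worth redirecting to a file first so you can eyeball the statements before piping them into mysql:

```shell
# A miniature stand-in dump with two tables.
cat > /tmp/mylargedump.sql <<'EOF'
INSERT INTO `table1` VALUES (1,'a');
INSERT INTO `table2` VALUES (2,'b');
INSERT INTO `table1` VALUES (3,'c');
EOF
# Only table1's INSERT statements survive the filter.
grep 'INSERT INTO `table1`' /tmp/mylargedump.sql > /tmp/table1-only.sql
wc -l < /tmp/table1-only.sql   # 2
```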
How do you dump data from a MySQL database table without the DROP and CREATE TABLE statements showing up before the INSERTs?
The following solution will give you just a list of INSERT statements. Simply use the options --skip-triggers --compact --no-create-info.
Here is an example.
mysqldump --skip-triggers --compact --no-create-info -u <username> -p<password> <database> <tables>