To help you use QIOS SqlBulkCopy, we are going to move a database to Azure step by step. The source is an on-premises SQL Server 2008 database. The destination is an Azure SQL Database V12.


QIOS SqlBulkCopy is not intended to be an all-in-one application. We didn't recreate everything you already have in other tools or applications. See it as an addition to your toolbox. To complete this tutorial, you need the following:

  • Visual Studio 2015
  • SQL Server Data Tools
  • Azure subscription (optional). We are going to publish from an on-premises server to Azure. However, if you just want to move your database to another on-premises server, you can do that too: instead of using Azure as a destination, use your alternate destination.

Create the schema

The first thing you need is a Visual Studio database schema project that we will publish to Azure. This schema represents all the tables, keys, views, stored procedures and constraints of the database, so it contains everything but the data. We can generate this schema from an existing database with Visual Studio combined with SQL Server Data Tools. Once we have the schema, we can publish it to Azure. Take the following actions to generate the schema:

  1. In Visual Studio 2015, open View > SQL Server Object Explorer. You don't need to have a solution loaded for this. Be sure to use SQL Server Object Explorer, not Server Explorer or Cloud Explorer; those don't have the required functionality.
  2. Add a connection to your source server, if you don't have it already.
  3. Right-click the database you want to migrate in SQL Server Object Explorer and click Create New Project.
  4. Specify your settings and complete the wizard.
  5. Now you have generated the complete schema of your database. You might need to modify the schema before you can publish it.
  6. Right-click the project in Solution Explorer and open the properties.
  7. Adjust your project settings so the target platform matches Microsoft Azure SQL Database V12 (or your destination server type).
  8. Try to build the database project. If it is not compatible with the target platform, modify the schema until it is.
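One common incompatibility you may run into when targeting Azure SQL Database V12 is a filegroup reference, which on-premises scripts often carry but Azure does not support. The table and column names below are hypothetical; your build errors will point at the actual objects:

```sql
-- Hypothetical 2008-style definition that fails the Azure V12 build,
-- because Azure SQL Database does not support filegroup references:
CREATE TABLE [dbo].[Orders] (
    [OrderID] INT NOT NULL PRIMARY KEY
) ON [PRIMARY];

-- Azure-compatible version: simply drop the filegroup clause.
CREATE TABLE [dbo].[Orders] (
    [OrderID] INT NOT NULL PRIMARY KEY
);
```

Repeat this build-and-fix cycle until the project builds cleanly against the Azure target.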

Publish the schema

After you have created the schema, you are going to publish it on your destination platform in an empty database.

  1. On your destination server or platform, create an empty database. You can do this in the Azure Portal. If you are moving your database to another (regular) SQL Server instead of Azure, this step is not required.
  2. In Visual Studio, right-click your Northwind-test database project in Solution Explorer and select Publish.
  3. Specify the information needed to connect to the destination server and click Generate Script.
  4. A script will now be generated to publish your database schema to Azure.
  5. Try to publish the script by executing it.
  6. Solve possible errors that occur in the source scripts (not in the publish script) and regenerate and execute the publish script.
  7. When the publication is successful you have an empty copy of your source database on your destination server.
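If you prefer T-SQL over the Azure Portal for step 1, an empty Azure database can also be created with a statement along these lines, executed against the master database of your Azure SQL server (the database name, edition and service objective here are examples; pick what fits your workload):

```sql
-- Run against the master database of your Azure SQL server.
-- 'Basic' is an example tier; choose the edition and service
-- objective that match your needs.
CREATE DATABASE [Northwind]
    (EDITION = 'Basic', SERVICE_OBJECTIVE = 'Basic');
```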

Bulk copy the data

Now we are going to use QIOS SqlBulkCopy to copy the source database to the (empty) destination database. You probably want to do a dry test run first. When you copy the actual production database, you should take the applications that write to the source database offline; otherwise data might get lost, or you will end up with constraints that cannot be re-enabled. To minimize your downtime you can devise a 'smart' copy scheme by choosing the order in which tables are copied, but that is up to you.

  1. Start QIOS SqlBulkCopy
  2. On the left, fill out the connection string to your source database and click Connect.
  3. When you are connected, click the Get Tables button to retrieve all the tables from the source database. Those tables will be added in the Tables to do list.
  4. On the right, fill out the connection string to your target database and click Connect.
  5. You can click Remove not existing tables above the Tables to do list. This removes all tables from the Tables to do list that don't exist in the target database. Since we made an exact copy via schema publishing, this does nothing in our case.
  6. You can modify the Tables to do list: change the order, or remove tables that you don't want to copy.
  7. When you are ready, press Copy. All the tables in the Tables to do list will be copied to the target database.
  8. You see the progress, the batch speed and the total speed in the Output window. During the process, rows are selected from the source tables and inserted into the target tables via SQL bulk insert.
  9. The number of rows written in one bulk insert can be configured in the Batch size textbox. You can tune the batch size to increase performance; the optimal value depends on your row size, network speed, etc.
  10. When a table is copied, it is moved to the Completed tables textbox. If you stop and restart the process, the tables in Completed tables are not copied again. So besides showing you what is done, this textbox also has an operational function. The work set of Tables to do minus Completed tables is also used when truncating tables or deleting rows.
  11. The SQL Server bulk-insert functionality that is used disables foreign keys before copying starts. However, it doesn't automatically re-enable them. When the copy process is finished, you are asked to re-enable the constraints.
  12. You are done.
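If you ever need to re-enable the constraints by hand instead of through the tool's prompt, the T-SQL involved looks roughly like this (the table name is an example; the second query generates the statements for every table that has foreign keys):

```sql
-- Re-enable and re-validate all foreign keys on one table.
-- WITH CHECK makes SQL Server verify the existing rows, so the
-- constraints become trusted again.
ALTER TABLE [dbo].[Orders] WITH CHECK CHECK CONSTRAINT ALL;

-- Generate such a statement for every table with foreign keys:
SELECT 'ALTER TABLE '
     + QUOTENAME(OBJECT_SCHEMA_NAME(parent_object_id)) + '.'
     + QUOTENAME(OBJECT_NAME(parent_object_id))
     + ' WITH CHECK CHECK CONSTRAINT ALL;'
FROM sys.foreign_keys
GROUP BY parent_object_id;
```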

Delete, truncate and stop

In the tutorial above you copied your database in one straight run. However, that is not how it normally works out. You never just copy your production database: you always do a dry test run, and something always goes wrong.

Delete the contents of the target tables

When your test run completes and you are ready to copy your production database, you probably don't want to start all over. With the Tables > Delete contents button you can delete all the contents of your target tables. Before you can do this successfully, you must disable your foreign keys. You can do that with Foreign Keys > Disable.

The tables whose contents get deleted are the Tables to do minus the Completed tables.

Deleting the contents of your tables (with T-SQL DELETE) causes your log file to grow, because each deleted row is written to the transaction log. For small databases this is usually no problem; for large databases it can become an issue in both time and log space consumed.
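A hand-rolled equivalent of disabling the foreign keys and then deleting might look like the following sketch (the table names are examples from Northwind; the first query generates a NOCHECK statement per foreign key):

```sql
-- Generate a disable statement for every foreign key in the database:
SELECT 'ALTER TABLE '
     + QUOTENAME(OBJECT_SCHEMA_NAME(parent_object_id)) + '.'
     + QUOTENAME(OBJECT_NAME(parent_object_id))
     + ' NOCHECK CONSTRAINT ' + QUOTENAME(name) + ';'
FROM sys.foreign_keys;

-- Once the constraints are disabled, the deletes succeed in any order,
-- but every deleted row is still written to the transaction log:
DELETE FROM [dbo].[Orders];
DELETE FROM [dbo].[Customers];
```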

Truncating the target tables

A much less resource-intensive way to clear your target database is truncating the tables. This empties the tables quickly without growing your log much, and it is the recommended way to clear your tables. You can do that with Tables > Truncate.

However, the disadvantage of this method is that no foreign keys may reference the target tables. Disabling them is not enough; they cannot exist. Therefore we have created the Foreign Keys > Generate create script and Foreign Keys > Drop buttons. The first generates a script with which you can recreate the foreign keys; the latter drops them.
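For a single constraint, the drop-truncate-recreate cycle looks like this in T-SQL (using Northwind's foreign key between Orders and Customers as an example; the generated create script will contain statements of the same shape):

```sql
-- Drop the foreign key so the referenced table can be truncated:
ALTER TABLE [dbo].[Orders] DROP CONSTRAINT [FK_Orders_Customers];

-- TRUNCATE is minimally logged, unlike DELETE:
TRUNCATE TABLE [dbo].[Customers];

-- Recreate the constraint afterwards; WITH CHECK validates the
-- rows that are present, so the constraint is trusted again:
ALTER TABLE [dbo].[Orders] WITH CHECK
    ADD CONSTRAINT [FK_Orders_Customers]
    FOREIGN KEY ([CustomerID]) REFERENCES [dbo].[Customers] ([CustomerID]);
```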

The tables that get truncated are the Tables to do minus the Completed tables.

After the tables are truncated, you can execute the create script with SQL Server Management Studio or Visual Studio, whichever you prefer.

Another way to fix your destination database is to use the SQL Server Data Tools in Visual Studio: just regenerate the publish script (which you created before) and execute it. This recreates the dropped foreign keys. This method is also how you upgrade your database when functionality changes, but that is beyond the scope of this document.

Stop and continue

When the copy process is running, you can always click Stop. All the tables that are fully copied, are in the Completed tables textbox. When you start the process again, those will be skipped.

The table that was being copied when you clicked the Stop button now has a row number next to it. The copy process will skip those rows when it continues. When selecting the rows from the source table, the rows are ordered by the primary key. For example, if the row number next to the Customer table is 847, the first 847 rows (ordered by its primary key, CustomerID) are skipped, and row number 848 will be the first row copied.
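Conceptually, resuming at row 848 corresponds to a query like the one below. Note the OFFSET clause requires SQL Server 2012 or later (a 2008 source would need a ROW_NUMBER-based workaround), and the tool itself may implement the skip differently; the table name is an example:

```sql
-- Skip the 847 rows that were already copied and continue with
-- row 848, in primary-key order.
SELECT *
FROM [dbo].[Customer]
ORDER BY [CustomerID]
OFFSET 847 ROWS;
```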