Do you need to migrate thousands of PST files, really big mailboxes or large email archives to Microsoft 365 (and is drive shipping the answer?)
What is drive shipping?
Microsoft’s drive shipping service is a multi-step process by which large amounts of data can be uploaded into the Microsoft cloud via an interim device that is physically shipped to a Microsoft location.
This is how it works:
Instead of transferring data across the network, the technique involves writing PST files to a hard drive along with a mapping file (for example, migrate PST <filename> to <username> primary or archive mailbox).
The (encrypted) hard drive is then physically shipped to a designated Microsoft location from where data centre personnel pre-stage the contents into Azure. The files are then ingested into Exchange Online according to the supplied mappings.
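To make the mapping step concrete, here is a minimal sketch of generating a mapping file from a PST inventory. The inventory entries and mailbox addresses are hypothetical, and the column set below follows the general shape of Microsoft’s PST Import mapping file; check the current Microsoft 365 Import service documentation for the exact schema your import job requires.

```python
import csv

# Hypothetical inventory: (file name on the drive, target mailbox, ingest into archive mailbox?)
pst_inventory = [
    ("annb.pst", "annb@contoso.onmicrosoft.com", False),
    ("annb_archive.pst", "annb@contoso.onmicrosoft.com", True),
]

with open("pst_mapping.csv", "w", newline="") as f:
    writer = csv.writer(f)
    # Illustrative header row; verify against Microsoft's current mapping-file schema
    writer.writerow(["Workload", "FilePath", "Name", "Mailbox", "IsArchive", "TargetRootFolder"])
    for name, mailbox, is_archive in pst_inventory:
        # IsArchive controls whether the PST lands in the primary or the archive mailbox
        writer.writerow(["Exchange", "", name, mailbox, str(is_archive).upper(), "/"])
```

The key point is that every PST on the drive must be unambiguously tied to a target mailbox (primary or archive) before shipping; a mistake here means data landing in the wrong place.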
If you have TBs of emails you want to move into Microsoft 365, drive shipping via PSTs is an option open to you.
This Microsoft article details the steps involved and the cost of using this service.
The same technique can be used to migrate mailboxes that are over 100 GB in size, or mailboxes containing one or more messages that exceed the 150-megabyte (MB) message limit (in which case resorting to PST files is recommended).
Using interim PSTs is also an option for migrating the contents of large email archives such as Enterprise Vault and EMC SourceOne, or email journals from platforms such as Mimecast.
The question is: Is drive shipping PSTs a good option for your large email archive migration?
Here are the pros and cons:
- It’s low cost…on paper*. The cost to import PST files to Microsoft 365 mailboxes using drive shipping is $2 USD per GB of data.
- It minimises impact on your network: if you have a sub-optimal network that cannot handle large data transfers, shipping PSTs to Microsoft on a data drive sidesteps the network entirely. Even on networks that can support around 500 Mbps, you can experience slow performance when you run a large migration alongside regular user activity.
- It avoids the impact of Microsoft throttling: Microsoft applies throttling to avoid overloading its servers. You won’t experience throttling when using native Microsoft mailbox moves, but many email archive migration solutions move data via the EWS protocol, which is subject to throttling (although Microsoft does make it possible to ease off throttling for the duration of a bulk migration).
- *It can work out expensive: At face value, $2 USD per GB looks cost-effective. For example, a 20 TB project would cost $40,960 to ‘drive ship’. But this does not include the added overhead of getting your data onto the drives (see next point).
- PST preparation is labour-intensive. Suffice to say that manually extracting data from archives into PST files and then preparing them for upload can be super time-consuming. Native tools for extraction out of third-party archives (such as the Enterprise Vault extraction wizard) are slow and not geared up for automated mass extractions. Once you’ve extracted the files, you’ll need to make sure they are prepared properly for Microsoft. This includes creating a mapping file, so Microsoft knows which file(s) belong to whom, and where you want them placed. Check out the steps you’ll need to carry out. Whilst it’s possible to automate the PST extraction and preparation process using third-party migration software, you’ll need to factor in this additional cost.
- It can take a long time: You’ll need to allow 7-10 days for your data to be uploaded from the drives into Azure (as we said earlier, this is where your data is pre-staged), and Microsoft then offers an ingestion rate of 24 GB per day. Using our 20 TB example, this means your PSTs would take around 860 days in total to ingest.
- It introduces an element of risk: when you use multiple hops and manual interventions to move your data, there’s the potential for things to go wrong. Even though drive shipping uses BitLocker encryption to protect your data in transit, many other steps introduce the potential for human error, including babysitting the extraction into PST files from your archive and mapping PST files to their owners. This, combined with the fact that extraction tools typically have no built-in error-checking, no ability to recover in the event of a failure, and no auditing, makes it difficult for you to prove chain of custody. Oh, and did I mention that PSTs as an interim file format are prone to corruption?
- Your source data needs to be static. If you’re migrating the contents of an email archive using drive shipping via PSTs you’ll ideally need to make your archive static during the course of the migration. This means stopping any archiving activity for the duration of your archive project, otherwise you’ll have the overhead of subsequently migrating any additions to your archive. We’ve encountered several projects where stopping archiving is not possible.
- Shortcuts aren’t addressed (and create confusion). You will need a game-plan for dealing with the shortcuts (also known as stubs) that typically link to archived items. Many enterprises end up migrating shortcuts along with regular emails into Exchange Online mailboxes. Whilst in most cases it’s possible to retrieve the full item across the network from an on-premises archive whilst your migration is taking place, various issues emerge once your PSTs have been uploaded into Microsoft 365. These include broken shortcuts (assuming at some point you will decommission your on-premises archive) and legacy shortcuts that can appear alongside the full migrated item in any eDiscovery exercise.
- Other limitations:
- A message size limit of 150 MB
- No more than 300 nested folders
- No support for Public Folders
- No flexibility over where your data is migrated to, or how it is split between destinations
- Volume restrictions of up to 10 TB
- A maximum of 10 hard drives for a single import job
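The cost and timeline figures quoted above can be sanity-checked with simple arithmetic. Here is a rough sketch using the article’s numbers ($2 USD per GB and a 24 GB/day ingestion rate); the function name, the binary TB-to-GB conversion, and the use of the upper end of the 7-10 day pre-staging window are our own assumptions, so the total lands slightly above the ~860-day figure quoted earlier.

```python
import math

def drive_shipping_estimate(volume_tb: float,
                            price_per_gb: float = 2.0,       # USD per GB, per the Microsoft import service
                            ingest_gb_per_day: float = 24.0, # Microsoft's quoted ingestion rate
                            prestage_days: int = 10) -> dict:
    """Rough cost/timeline estimate for drive shipping PSTs into Microsoft 365."""
    volume_gb = volume_tb * 1024  # binary TB -> GB, matching the article's $40,960 figure
    ingest_days = math.ceil(volume_gb / ingest_gb_per_day)
    return {
        "cost_usd": volume_gb * price_per_gb,
        "ingest_days": ingest_days,
        "total_days": prestage_days + ingest_days,
    }

print(drive_shipping_estimate(20))
# → {'cost_usd': 40960.0, 'ingest_days': 854, 'total_days': 864}
```

Even before adding the labour cost of extraction and PST preparation, the ingestion rate alone puts a multi-year floor under a 20 TB project.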
So should we use drive shipping for our migration?
In summary, the only time we can see drive shipping using PSTs as being beneficial is if you have:
- Very slow network connectivity
- Lots of inactive data to migrate. For example, archives belonging to ‘leavers’
Our email archive migration service uses a series of techniques to mitigate the impact of Microsoft throttling, enabling us to move data directly from your archive into Exchange Online (either primary mailboxes or archives) at a rate in excess of 3 TB a day. There are also no overheads or time delays from first extracting into PSTs.
We can also schedule migration activity to coincide with less busy times on your network.
Also, because we move your data in one step, direct from source to target, we avoid the non-compliance risk of interim storage and human error.
We can also help you avoid moving everything. For example, by applying date ranges.
You can also avoid creating a storage overhead in the cloud by managing where data gets migrated to, for example by moving messages over a certain age into archive mailboxes, or moving PSTs belonging to leavers into a separate (but indexed) Azure-based store.
On a final note, using interim PSTs is also an option when migrating journals from services such as Mimecast and Proofpoint, but there are a few things to watch out for when migrating into Microsoft 365. You can find out more about migrating journals to Microsoft 365 in this article.