Yes, Virginia. You can upgrade to the latest Exchange Cumulative Update – even if you aren’t keeping up on those .Net versions!

Let me just start out by saying you really need to be keeping your Exchange Servers current. It’s not just about support, but the additional fixes and, most importantly, security updates. (And if you are running in hybrid mode, it’s an absolute requirement to be on the latest or immediately previous CU.)

So, do yourself a favor and upgrade as soon as possible after a new release, and you won’t run into the dreaded disappearing CU problem.

So what to do if that “bridge” CU is missing? Well, for the last couple of years, the advice has been to call Microsoft Support, get the CUs that are no longer available publicly, and step your way through the CUs and .NET upgrades.

I would still recommend that option for the most part. It’s the tested and proven way to upgrade.

There is, however, a more recent option you may have missed: upgrading directly to the latest supported .NET version, then to the latest CU. I’m not making this up!

Check it out:

When upgrading Exchange from an unsupported CU to the current CU and no intermediate CUs are available, you should upgrade to the latest version of .NET that’s supported by Exchange first and then immediately upgrade to the current CU. This method doesn’t replace the need to keep your Exchange servers up to date and on the latest, supported, CU.
Microsoft makes no claim that an upgrade failure will not occur using this method, which may result in the need to contact Microsoft Support Services.

For example:

You are on 2013 CU9 and want to upgrade to CU19. I would recommend upgrading to CU15, then .NET 4.6.2, then CU19, then .NET 4.7.1, following this article.

However: If that isn’t possible for some reason, you could go straight to .NET 4.7.1, then install CU19.

Note that caveat again: Microsoft makes no claim that an upgrade failure will not occur using this method, which may result in the need to contact Microsoft Support Services.

What that is saying is that Microsoft will assist you if this fails; however, it’s not a tested scenario.

So bottom line: Keep your Exchange Servers upgraded and happy in a timely manner. But if that is not possible for some reason, you do have another option – one that Microsoft Support will now assist you with in the event it goes sour.


Download that new Cumulative Update for Exchange…While you can.

No, the messaging world is not ending, but things have changed tremendously in the last few years around the Exchange upgrade cadence. The expectation is that you are keeping your servers up to date – applying the latest CU every quarter. There are obviously good reasons to do so – namely new features and security fixes. Having said that, you may be surprised to learn that unless you are in hybrid mode, you will still be supported running an out-of-date CU – with some caveats.

But here’s the rub: Exchange relies heavily on the .Net Framework and deep down in places you don’t talk about at parties; you want those .Net updates, you *need* those .Net updates. It’s here where the lack of keeping current will bite you in the rear.

When a new CU is released, the CU from three releases (nine months) earlier is removed from public download. If that update was required to support an upgrade of .NET, then you are in a pickle.

As you can see from the Exchange Supportability Matrix, and Michel’s blog post, there have been a number of these “bridge” CUs that are required as you make the leap to the next .NET version. There are a few moving pieces here, but typically you will be given at least six months’ notice that a future CU will require a specific .NET version that is optional now and can be installed on the current CU.

So for example, Exchange 2013 CU 19 supports .Net Framework 4.7.1. You want to upgrade to that version after you apply CU19. Now you are well positioned for the future CU that requires 4.7.1.

Expect this to be the new norm:

  1. Advance notice of a new .NET requirement in a future CU.
  2. Support for that .NET version in a CU when it’s released.
  3. Optional upgrade to that .NET version after applying the newly released CU.
  4. Required upgrade to that .NET version before you can apply the CU that was part of the original notice.

If you aren’t convinced by now that keeping up with the Exchange Cumulative Updates is in your best interest, I would offer that you at least download them as they are released – even if you never plan to actually install them. That way you can easily access that “bridge” CU and avoid a call to Microsoft Support, since that will be the only way to get them once they are pulled from public download.




This message wasn’t delivered to anyone because there are too many recipients. The limit is 100. More hybrid fun!

If you are living the Exchange Online hybrid dream like so many, you often run into oddities around message flow. One I encountered recently was a nagging NDR an Exchange Online user would get when sending to an old-school Distribution List.


This message wasn’t delivered to anyone because there are too many recipients. The limit is 100. This message has 198 recipients.


Action: failed
Status: 5.5.3
X-Supplementary-Info: < #5.5.3 smtp;550 5.5.3 RESOLVER.ADR.RecipLimit; too many recipients>

Now the first thought is some Exchange Online limitation, but the NDR was actually generated by an on-premises server – and, as we all know, DLs count as one recipient, right? Besides, I wasn’t aware of any limit of 100 anywhere in Exchange – except the pickup directory for the transport service.

Regardless, due diligence required some checking, and the sender (who was part of the list) had the standard recipient limits in place:

And if you didn’t already know, 500 is the default in Exchange Online, and that cannot be modified.

The other members of the group were all on-prem, so I checked their recipient limits and, as expected, those were set to the default of “unlimited”.

Hmm, did someone mess with the on-prem org limits? That didn’t make sense, since this was the only user reporting the issue. But being the good scout, I checked anyway, and that value was set much higher than 100.
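If you want to run the same checks yourself, they are quick one-liners in the Exchange Management Shell (the mailbox alias below is a placeholder):

```powershell
# Organization-wide recipient limit on-premises
Get-TransportConfig | Format-List MaxRecipientEnvelopeLimit

# Per-mailbox recipient limit for the sender ("jwoods" is a placeholder)
Get-Mailbox jwoods | Format-List RecipientLimits
```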

So where the heck was that coming from?

Checked receive connector limits, looked through SMTP protocol logs. No smoking gun.

Then it occurred to me: was it possible that the remote mailbox for the 365 sender was the culprit?

Sure enough, there it was:

Set-RemoteMailbox doesn’t let you change that attribute, so I cleared it with ADUC and the birds sang once again.
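For the script-minded, a rough sketch of the check and the fix, assuming the AD attribute backing RecipientLimits is msExchRecipLimit (the alias is a placeholder):

```powershell
# Inspect the recipient limit stamped on the remote mailbox
Get-RemoteMailbox jwoods | Format-List RecipientLimits

# Set-RemoteMailbox won't touch it, so clear the backing AD attribute
# directly (requires the ActiveDirectory module; same effect as ADUC)
Set-ADUser -Identity jwoods -Clear msExchRecipLimit
```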

Reconstructing this, I can only assume that the on-prem mailbox had this value set for some reason and during the move to Exchange Online, it was preserved for the remote mailbox.

The interesting piece to remember is that the DL was expanded in Exchange Online, and because the majority of the list members were still on-prem, the recipient limits on the on-prem object came into play.

Good times!




Outlook ate my Attachments!

Fixed! If you are seeing this issue, close and re-open Outlook! 

Kinda stumbled upon this the other day when sending a text file through Outlook. The recipient informed me it wasn’t received. Oh jeez, did I just think I sent that?

So I sent it again and even included a pic of it attached to ease my embarrassment. Nope, the recipient was still not getting it – just the email itself.

I checked my sent items and sure enough, no paper clip icon. So I ran a bunch of tests sending different sized text files, and as you can see from the results (with my clever subject lines – sigh), some went through, some did not.

Message tracking in Office 365 revealed nothing unusual.

I wondered if it was just me, so I did a little searching and found someone reporting the same issue in the TechNet Forums:

At this point, it appears to affect only text attachments over 4 MB using Microsoft Office 365 ProPlus Click-to-Run Outlook 8431.2079 and above, but not Outlook on the Web (OWA), or whatever it’s called now. Your experience may be different.

As you can see, I’m using the latest and greatest version!

I have also reported this issue, so hopefully it will get fixed soon.

In the meantime, compress that text attachment to below 4 MB, or upload it to OneDrive or a shared drive.

Safe Senders, Spoofing and Office 365. They really can be friends!

One of the rather interesting side effects of moving your mailbox to Exchange Online is the change in behavior of the old trusty Safe Senders list. As Terry points out in this blog post from last year, if your mail client trusts only messages sent from a safe sender, all other messages will end up in junk mail. This is a change from on-premises – where only messages identified as junk are marked as spam. All others – including those from trusted senders – will arrive in the inbox.

For the most part, this is not a big deal. Simply inform your end-users of this change once their mailbox has been migrated and let them decide how to handle it: keep using Safe Senders and whitelist any legitimate senders, or disable it and use the standard junk mail settings in the client.
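If a user chooses the second route, the “trust only safe senders” behavior can also be toggled administratively per mailbox (the identity is a placeholder):

```powershell
# Check whether the mailbox trusts only its Safe Senders list
Get-MailboxJunkEmailConfiguration jwoods | Format-List TrustedListsOnly

# Turn off "safe lists only" so standard junk filtering applies
Set-MailboxJunkEmailConfiguration jwoods -TrustedListsOnly $false
```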

There are specific scenarios where this could be problematic, however. Many organizations have developed internal processes that send reports, alerts and updates anonymously from on-premises systems to their workforce. It’s very common to have dozens to hundreds of these processes, enabled over many years – each sending as an arbitrary SMTP address – essentially spoofing an authoritative domain. As I discovered, it’s not as easy as it sounds to ask end-users – especially executives (who rely heavily on the Safe Sender option) – to whitelist numerous addresses when it wasn’t required in the past.

Ah, the solution is easy. Just add your authoritative domain to Safe Senders. That will cover you for everything. Not so fast!

One, you can’t add an authoritative domain to the trusted list.

Two, Exchange Online doesn’t honor whitelisted domains anyway.

One possible solution that is the least disruptive to the end-user: Trust those internal processes at the Exchange server level.

Example: Assume you are in hybrid mode and still have an Exchange Server on-prem. Create a receive connector on the Exchange Server. Scope the remote IP addresses to the internal SMTP servers that send these messages to end-users, then check the box (or use PowerShell) to set the receive connector you just created as “Externally Secured”.
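A rough PowerShell sketch of those steps (the connector name, server name, and IP addresses are placeholders for your environment):

```powershell
# Create a receive connector scoped to the internal app servers only
New-ReceiveConnector -Name "Internal Relay - Trusted Apps" `
    -Server EXCH01 -TransportRole FrontendTransport `
    -Bindings 0.0.0.0:25 -RemoteIPRanges 10.1.1.10,10.1.1.11 `
    -Usage Custom

# Mark it Externally Secured so messages it accepts are treated
# as authenticated, internal mail
Set-ReceiveConnector "EXCH01\Internal Relay - Trusted Apps" `
    -AuthMechanism Tls,ExternalAuthoritative `
    -PermissionGroups AnonymousUsers,ExchangeServers
```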

The receive connector auth and permissions will now look like this:

AuthMechanism           : Tls, ExternalAuthoritative
PermissionGroups        : AnonymousUsers, ExchangeServers


What you see in the headers of a received message:

X-MS-Exchange-Organization-AuthAs: Internal

X-MS-Exchange-Organization-AuthMechanism: 10

In the end, all messages that pass through this connector (and eventually through the hybrid connector to Office 365) will be considered authenticated and will not be sent to junk mail – even if the sender is not in the Safe Senders list. Boom!

P.S. This is only an example. Do not enable this option if you do not trust or have control of the sending servers.




Abs of Steel and My DKIM Body Hash Won’t Verify. Help Me Dr. Phil!

If you are applying an *inbound* disclaimer with a mail flow rule in Office 365, you may be surprised to see a DKIM body hash failure in the header of the message. (And if you have never noticed this, well, that’s understandable!)



Message sent from Gmail with disclaimer rule:


Authentication-Results: spf=pass (sender IP is;; dkim=pass (signature was verified);; dmarc=pass action=none;; dkim=fail (body hash did not verify);





Message sent from Gmail w/o disclaimer rule:


Authentication-Results: spf=pass (sender IP is;; dkim=pass (signature was verified);; dmarc=pass action=none;; dkim=pass (signature was verified);


This brings up some interesting questions. Is the DKIM check *after* mail flow rules are processed? And does this mean I have lost the ability to check for DKIM failures for those messages?

Thankfully, no on both. I have confirmed that you can ignore the failure and have been assured that mail flow rules are evaluated after DKIM verification.

As you can see in my examples, in both cases the first DKIM check passes and, just as importantly, DMARC passes – and that is what you should be hanging your hat on.

Stay safe out there!

Designing My Office 365 Tenant for On-Premises

With the new Exchange Online storage limits, I got to thinking what that would look like if I were to do this on-prem.

Let’s see: 2GB mailboxes to start… 100GB max size – no archive (why would I need an archive?)… 16,000 mailboxes. Follow the Preferred Architecture otherwise: DAG, four copies of the databases, one lagged, JBOD, etc.

Run that through the latest mailbox calculator and wait for the big reveal.


Twelve Mailboxes per Database!


144 Databases in the DAG. I can handle that. Yes, 12 mailboxes in each. Got it.

Much less RAM than you might think. I have never considered using a “Lagged Copy Server” though.

Ouch. These giant mailboxes will need a bunch of storage! Not sure I am comfortable with databases that size. I am beginning to lose my nerve.

I think I will take the third option and leave the 100GB mailboxes to the Exchange Online professionals. You know who you are.


Auto-Forwarding is not dead. It’s very happy!

Auto-Forwarding from Exchange. Now there’s a subject that has been beaten to death. We all know how it works, and it has certainly been documented enough, don’t you think?

Posts from Tim and others are worthy reads. My goal here is to simply put all of this together and maybe point out some things you may not know.

There have traditionally been two options to forward from Exchange: Outlook Rules or Administrator-enabled forwarding.

  1. Outlook Rules: End users have three options, as illustrated in the image below. Of course, only one can be chosen, but I have checked all three to point them out. Additionally, you can select an existing user from the GAL or contacts, or enter an SMTP address ad hoc. Regardless of which option in the rule is selected, the messages will be delivered to the user’s inbox.




2. Administrator-enabled Auto-Forwarding:

This is done using the EAC or Exchange PowerShell.

In EAC, under Delivery options for the Mailbox.

Note here that the recipient has to exist in the Address Book.

With PowerShell and Set-Mailbox, you have additional options:

The ForwardingAddress parameter specifies a forwarding address for messages that are sent to this mailbox. A valid value for this parameter is a recipient in your organization. You can use any value that uniquely identifies the recipient.

The ForwardingSmtpAddress parameter specifies a forwarding SMTP address for messages that are sent to this mailbox. Typically, you use this parameter to specify external email addresses that aren’t validated.

Set-Mailbox -Identity "John Woods" -DeliverToMailboxAndForward $true -ForwardingSmtpAddress jwoods@fabrikam.com

Note the important difference and the ability to include *or* exclude delivery to the mailbox.

How messages are delivered and forwarded is controlled by the DeliverToMailboxAndForward parameter.

  • DeliverToMailboxAndForward is $true   Messages are delivered to this mailbox and forwarded to the specified recipient.
  • DeliverToMailboxAndForward is $false   Messages are only forwarded to the specified recipient. Messages aren’t delivered to this mailbox.
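The two parameters and the delivery switch can be contrasted in a short sketch (the identities and the external address are placeholders):

```powershell
# Forward to an internal recipient AND keep a copy in the mailbox
Set-Mailbox "John Woods" -DeliverToMailboxAndForward $true `
    -ForwardingAddress "Help Desk"

# Forward to an external, unvalidated SMTP address ONLY -
# nothing is delivered to the mailbox itself
Set-Mailbox "John Woods" -DeliverToMailboxAndForward $false `
    -ForwardingSmtpAddress jwoods@fabrikam.com
```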

Office 365 Outlook Web Access adds an additional wrinkle here. End-users have access to a trimmed-down version of the administrator forwarding option.

This is not an Outlook rule, but similar to:
Set-Mailbox -Identity "John Woods" -DeliverToMailboxAndForward $true -ForwardingSmtpAddress jwoods@fabrikam.com

Of course, Office 365 users can still create rules to forward in Outlook as well.


Why you may not want to forward to external recipients

  1. Data leaks, typically unnoticed, to mailboxes you do not control.
  2. A forwarding rule could create a mail loop between your org and another.
  3. Forwarded messages could land your sending IP addresses on block lists.
  4. Forwarding could bypass your data retention requirements. *


Things you may not know about forwarding

*Outlook forwarding rules allow the message to bypass the sent items. Yep, that’s right. The rule is server-based and handled at the transport level. The forwarded messages will arrive in the recipient’s inbox, but will not appear in the user’s Sent Items folder. You can verify this with a message trace: the source context of the forwarded messages will be Transport Rule Agent. The exception is if the rule is run manually against existing messages; those forwarded messages will be in the sent items. The header of each will look similar to this:

If you are an Office 365 customer or run Exchange 2016 on-premises, you can mitigate this loophole.


A forward rule and a redirect rule do essentially the same thing, except that a redirected message will not have FW: in the subject to indicate to the recipient that the message was forwarded. It will appear to have come directly from the original sender. Well, that’s at least what the official documentation says. In my experience, that is not entirely true. The FW: may be missing, but a recipient will be able to see that the message was routed through another mail system. It may show “on behalf of” or “via” your organization (Google does this). And of course, if they check the internet headers, the real path will be revealed. Some recipient mail systems may even reject messages forwarded like this. If the administrator sets forwarding at the mailbox level, or a 365 user sets it via OWA, it is essentially a redirect.


Preventing user auto-forwarding

  1. Block at the remote domain level: Set-RemoteDomain -Identity ExternalDomain -AutoForwardEnabled:$false. This will stop the Outlook forwarding rules.
  2. For granular control, use a transport rule to prevent forwarding by group or user. Otherwise, block auto-forwarding on the default remote domain as above. This will also stop Outlook forwarding rules.
  3. Remove the ability for end-users to auto-forward in Office 365 OWA.
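For option 2, a transport rule along these lines would do it (the rule name, group name, and rejection text are placeholders):

```powershell
# Reject auto-forwarded messages leaving the org, with an
# exception for members of an allowed group
New-TransportRule -Name "Block external auto-forward" `
    -FromScope InOrganization -SentToScope NotInOrganization `
    -MessageTypeMatches AutoForward `
    -ExceptIfFromMemberOf "Forwarding Allowed" `
    -RejectMessageReasonText "External auto-forwarding is not permitted."
```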

So there you have it. A compilation of the best of the forwarding articles, with some auto-tuning from me.

My recommendation: Block all auto-forwarding at all levels for users. Leave it to the professionals – your messaging administrators who can enable auto-forwarding at the server level.

I can’t think of too many business requirements for auto-forwarding, but I am sure there are some out there. I hope this little update helps you understand it a bit more.


Why can’t I point my clients to the DAG Cluster IP?

I was reminded today of a question I used to see a lot in the forums. Not so much anymore, but perhaps a refresher is in order.

Granted, it seems almost brilliant to simply configure all the URLs and connection points to point to the DAG IP. And after all, it does say its use is “Cluster and Client” 😛


And if that means there is no need to worry about load balancing – just let Exchange handle it – then why not?

Here’s why:

  1. There is no Exchange dependency on the Cluster IP being online. Both Exchange 2013 and Exchange 2016 support IP-Less Database Availability Groups. The cluster IP can go offline and Exchange will run just fine. The only real reason to assign a Cluster IP address is if you are using backup software or another 3rd party application that requires it. If you run Exchange with the Preferred Architecture recommendations, you won’t be doing backups anyway!
  2. If the Cluster name goes offline and the IP with it, Managed Availability won’t attempt to bring it online. That requires manual intervention. Yuck.
  3. The Cluster IP is held by a specific mailbox server in the DAG at any one time – meaning all client connections will go through that multi-role server and no others.
  4. If the quorum owner moves to another server, there is no guarantee that the clients will handle that gracefully.
  5. The only way to prevent a server from end-user client access in this scenario is to pause or stop the cluster service on the affected server.
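Point 1 is worth underlining: on Exchange 2013 SP1 and later, you can create the DAG without a cluster administrative access point at all. A sketch (DAG, witness server and directory names are placeholders):

```powershell
# Create an IP-less DAG - no cluster name or IP is registered,
# so there is nothing for clients to mistakenly connect to
New-DatabaseAvailabilityGroup -Name DAG1 `
    -WitnessServer FS01 -WitnessDirectory C:\DAG1 `
    -DatabaseAvailabilityGroupIpAddresses ([System.Net.IPAddress]::None)
```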






What really happened to the cast of “Leave it to Beaver” (and a reminder about the DAG Replay Manager)

If you are using lagged copies, you have hopefully enabled the Replay Lag Manager as well. Once you do so, be aware of the implications. Most notably:

“consider an environment where a given database has 4 copies (3 highly available copies and 1 lagged copy), and the default setting is used for ReplayLagManagerNumAvailableCopies. If a non-lagged copy is out-of-service for any reason (for example, it is suspended, etc.) then the lagged copy will automatically play down its log files in 24 hours.”

To repeat: By default, if a non-lagged copy is out of service for more than a day, the lagged copy of that database will play down its logs and essentially become an HA copy.

So consider this scenario: Your servers host a mix of HA and lagged copies on the same drives. One of them encounters a hardware issue, so you suspend all the databases on it and block activation until you can fix the problem. But that’s OK – there are three healthy copies of the databases on other servers. Here’s the catch, though: they have to be three HA copies. If it’s two HA copies and one lagged, then log play-down will kick off on those lagged copies after 24 hours if you haven’t changed the default – and there go the suspenders you were counting on in case the belt fails.

Sounds obvious, but it’s something that could bite you if you aren’t paying attention – and you suddenly realize two days later that the replay queue lengths of all the affected databases are at zero. So stay safe out there.
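Enabling and verifying the feature is a one-liner per DAG (the DAG name is a placeholder):

```powershell
# Enable Replay Lag Manager on the DAG so lagged copies are
# automatically played down when too few HA copies remain healthy
Set-DatabaseAvailabilityGroup DAG1 -ReplayLagManagerEnabled $true

# Confirm the setting
Get-DatabaseAvailabilityGroup DAG1 | Format-List ReplayLagManagerEnabled
```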




Note that in Exchange 2016 CU1, Replay Lag Manager is enabled by default – among other goodies!

As for what happened to the cast of “Leave it to Beaver”, well, not much really.