Safe Senders, Spoofing and Office 365. They really can be friends!

One of the rather interesting side effects of moving your mailbox to Exchange Online is the change in behavior of the old trusty Safe Sender list. As Terry points out in this blog post from last year, if your mail client trusts only messages sent from a safe sender, all other messages will end up in junk mail. This is a change from on-premises behavior, where only messages classified as junk are marked as spam; all others – including those from trusted senders – arrive in the inbox.

For the most part, this is not a big deal: simply inform your end-users of this change once their mailbox has been migrated and let them decide how to handle it. They can keep using Safe Senders and whitelist any legitimate senders, or disable the option and use the standard junk mail settings in the client.

There are specific scenarios where this could be problematic, however. Many organizations have developed internal processes that send reports, alerts and updates anonymously from on-premises systems to their workforce. It’s very common to have dozens to hundreds of these processes, enabled over many years – each sending as an arbitrary SMTP address, essentially spoofing an authoritative domain. As I discovered, it’s not as easy as it sounds to ask end-users – especially executives (who rely heavily on the Safe Sender option) – to whitelist numerous addresses when it wasn’t required in the past.

Ah, the solution is easy. Just add your authoritative domain to Safe Senders. That will cover you for everything. Not so fast!

One, you can’t add an authoritative domain to the trusted list.

Two, Exchange Online doesn’t honor whitelisted domains anyway.

One possible solution that is the least disruptive to the end-user: Trust those internal processes at the Exchange server level.

Example: Assume you are in hybrid mode and still have an Exchange server on-premises. Create a receive connector on that server, scope the remote IP addresses to the internal SMTP servers that send these messages to end-users, then check the box (or use PowerShell) to mark the receive connector you just created as “Externally Secured”.
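The steps above can be sketched in the Exchange Management Shell. The connector name, server name, bindings and IP ranges below are placeholders for your environment:

```powershell
# Create a custom receive connector scoped to the internal app servers
# ("EXCH01", "Internal App Relay" and the IP ranges are examples only)
New-ReceiveConnector -Name "Internal App Relay" -Server "EXCH01" `
    -TransportRole FrontendTransport -Usage Custom `
    -Bindings "" -RemoteIPRanges "",""

# Mark it Externally Secured so messages arriving on it are treated as
# authenticated; ExternalAuthoritative requires the ExchangeServers group
Set-ReceiveConnector "EXCH01\Internal App Relay" `
    -AuthMechanism Tls,ExternalAuthoritative `
    -PermissionGroups AnonymousUsers,ExchangeServers
```
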

The receive connector auth and permissions will now look like this:

AuthMechanism           : Tls, ExternalAuthoritative
PermissionGroups        : AnonymousUsers, ExchangeServers


What you see in the headers of a received message:

X-MS-Exchange-Organization-AuthAs: Internal

X-MS-Exchange-Organization-AuthMechanism: 10

In the end, all messages that pass through this connector (and eventually through the hybrid connector to Office 365) will be considered authenticated and will not be sent to junk mail – even if the sender is not in the Safe Sender list. Boom!

P.S. This is only an example. Do not enable this option if you do not trust or have control of the sending servers.




Abs of Steel and My DKIM Body Hash Won’t Verify. Help Me Dr. Phil!

If you are applying an *inbound* disclaimer with a mail flow rule in Office 365, you may be surprised to see a DKIM body hash failure in the header of the message. (And if you have never noticed this, well, that’s understandable!)



Message sent from Gmail with disclaimer rule:


Authentication-Results: spf=pass (sender IP is;; dkim=pass (signature was verified);; dmarc=pass action=none;; dkim=fail (body hash did not verify);





Message sent from Gmail w/o disclaimer rule:


Authentication-Results: spf=pass (sender IP is;; dkim=pass (signature was verified);; dmarc=pass action=none;; dkim=pass (signature was verified);


This brings up some interesting questions. Is the DKIM check *after* mail flow rules are processed? And does this mean I have lost the ability to check for DKIM failures for those messages?

Thankfully, the answer is no on both counts. I have confirmed that you can ignore the failure, and I have been assured that mail flow rules are evaluated after DKIM verification.

As you can see in both examples, the first DKIM check passes and, just as importantly, so does DMARC. That is what you should be hanging your hat on.
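To make the "first verdict wins" point concrete, here's a toy illustration (not a production parser – real Authentication-Results headers need a proper RFC 8601 parser) that pulls the dkim= verdicts out of a header like the ones above:

```python
import re

def dkim_results(auth_results: str) -> list[str]:
    """Pull every dkim= verdict out of an Authentication-Results header."""
    return re.findall(r"dkim=(\w+)", auth_results)

# Condensed sample, mirroring the disclaimer-rule header above
header = ("Authentication-Results: spf=pass (sender IP is x.x.x.x); "
          "dkim=pass (signature was verified); dmarc=pass action=none; "
          "dkim=fail (body hash did not verify)")

verdicts = dkim_results(header)
# The first verdict is the original signature check; later ones can fail
# once a mail flow rule has modified the body.
print(verdicts)     # ['pass', 'fail']
print(verdicts[0])  # 'pass'
```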

Stay safe out there!

Designing My Office 365 Tenant for On-Premises

With the new Exchange Online storage limits, I got to thinking about what that would look like if I were to do this on-prem.

Let’s see: 2GB mailboxes to start… 100GB max size – no archive (why would I need an archive?)… 16,000 mailboxes. Otherwise, follow the Preferred Architecture: DAG, four copies of the databases, one lagged, JBOD, etc.

Run that through the latest mailbox calculator and wait for the big reveal.


Twelve Mailboxes per Database!


144 Databases in the DAG. I can handle that. Yes, 12 mailboxes in each. Got it.

Much less RAM than you might think. I have never considered using a “Lagged Copy Server” though.

Ouch. These giant mailboxes will need a bunch of storage! Not sure I am comfortable with databases that size. I am beginning to lose my nerve.
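For a sense of why the per-database mailbox count comes out so low, here's a hedged back-of-envelope. The 2 TB database size cap and the 40% overhead factor are my own assumptions for illustration, not the calculator's internals:

```python
# Back-of-envelope: why 100 GB mailboxes shrink the mailbox count per DB.
max_db_gb = 2000   # assumed max database size (~2 TB, common sizing guidance)
quota_gb = 100     # mailbox quota from the scenario above
overhead = 1.4     # assumed 40% on-disk overhead (indexes, whitespace, etc.)

mailboxes_per_db = int(max_db_gb // (quota_gb * overhead))
print(mailboxes_per_db)  # 14 - the same ballpark as the calculator's 12
```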

I think I will take the third option and leave the 100GB mailboxes to the Exchange Online professionals. You know who you are.


Auto-Forwarding is not dead. It’s very happy!

Auto-Forwarding from Exchange. Now there’s a subject that has been beaten to death. We all know how it works, and it has certainly been documented enough, don’t you think?

Posts from Tim and others are worthy reads. My goal here is to simply put all of this together and maybe point out some things you may not know.

There have traditionally been two options to forward from Exchange: Outlook Rules or Administrator-enabled forwarding.

  1. Outlook Rules: End users have three options, as illustrated in the image below. Of course, only one can be chosen, but I have checked all three to point them out. Additionally, you can select an existing user from the GAL or contacts, or enter an SMTP address ad hoc. Regardless of which option in the rule is selected, the messages will be delivered to the user’s inbox.




2. Administrator enabled Auto-Forwarding:

This is done using the EAC or Exchange PowerShell.

In the EAC, look under Delivery options for the mailbox.

Note here that the recipient has to exist in the Address Book.

With PowerShell and Set-Mailbox, you have additional options:

The ForwardingAddress parameter specifies a forwarding address for messages that are sent to this mailbox. A valid value for this parameter is a recipient in your organization. You can use any value that uniquely identifies the recipient.

The ForwardingSmtpAddress parameter specifies a forwarding SMTP address for messages that are sent to this mailbox. Typically, you use this parameter to specify external email addresses that aren’t validated.

Set-Mailbox -Identity "John Woods" -DeliverToMailboxAndForward $true -ForwardingSMTPAddress
 Note the important difference and the ability to include *or* exclude delivery to the mailbox.

How messages are delivered and forwarded is controlled by the DeliverToMailboxAndForward parameter.

  • DeliverToMailboxAndForward is $true   Messages are delivered to this mailbox and forwarded to the specified recipient.
  • DeliverToMailboxAndForward is $false   Messages are only forwarded to the specified recipient. Messages aren’t delivered to this mailbox.
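Putting the two parameters side by side as a sketch (the identity and addresses are placeholders):

```powershell
# Forward to an internal recipient (must exist in the directory) and
# keep a copy in the mailbox
Set-Mailbox -Identity "John Woods" -DeliverToMailboxAndForward $true `
    -ForwardingAddress ""

# Forward to an arbitrary external SMTP address and do NOT keep a copy
Set-Mailbox -Identity "John Woods" -DeliverToMailboxAndForward $false `
    -ForwardingSmtpAddress ""
```
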

Office 365 Outlook Web Access adds an additional wrinkle here. End-users have access to a trimmed-down version of the administrator forwarding option.

This is not an Outlook rule, but similar to:
Set-Mailbox -Identity "John Woods" -DeliverToMailboxAndForward $true -ForwardingSmtpAddress

Of course, Office 365 users can still create rules to forward in Outlook as well.


Why you may not want to forward to external recipients

  1. Data leaks, typically unnoticed, to mailboxes you do not control.
  2. A forwarding rule could create a mail loop between your org and another.
  3. Forwarded messages could land your sending IP addresses on block lists.
  4. Forwarding could bypass your data retention requirements. *


Things you may not know about forwarding

*Outlook forwarding rules allow the message to bypass Sent Items. Yep, that’s right. The rule is server-based and handled at the transport level. The messages will be in the inbox, but not in the Sent Items folder. You can verify this with a message trace; the source context of the forwarded messages will be Transport Rule Agent. The exception is if the rule is run manually against existing messages – those forwarded messages will be in Sent Items. The header of each will look similar to this:

If you are an Office 365 customer or run Exchange 2016 on-premises, you can mitigate this loophole.


A forward rule and a redirect rule do essentially the same thing, except that a redirected message will not have FW: in the subject to indicate to the recipient that the message was forwarded; it will appear to have come directly from the original sender. Well, at least that’s what the official documentation says. In my experience, that is not entirely true. The FW: may be absent, but a recipient will still be able to see that the message was routed through another mail system. It may show “on behalf of” or “via” your organization (Google does this), and of course, if they check the internet headers, the real path will be revealed. Some recipient mail systems may even reject messages forwarded like this. If the administrator sets forwarding at the mailbox level, or a 365 user sets it via OWA, it is essentially a redirect.


Preventing user auto-forwarding

  1. Block at the remote domain level: Set-RemoteDomain -Identity ExternalDomain -AutoForwardEnabled:$FALSE. This will stop the Outlook forwarding rules.
  2. For granular control, use a transport rule to prevent by group or user. Otherwise, block the default remote domain for auto-forwarding as above. This will also stop Outlook forwarding rules.
  3. Remove the ability for end-users to auto-forward in Office 365 OWA.
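The first two options above can be sketched like this (the rule name and rejection text are examples; adjust the scoping to your needs):

```powershell
# 1. Block auto-forwarding to the internet for everyone via the default
#    remote domain (this also stops Outlook client forwarding rules)
Set-RemoteDomain Default -AutoForwardEnabled $false

# 2. Or, for granular control, reject auto-forwarded messages leaving the
#    org with a mail flow rule; add exceptions for permitted users/groups
New-TransportRule -Name "Block external auto-forward" `
    -FromScope InOrganization -SentToScope NotInOrganization `
    -MessageTypeMatches AutoForward `
    -RejectMessageReasonText "External auto-forwarding is not permitted."
```
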

So there you have it. A compilation of the best of the forwarding articles, with some auto-tuning from me.

My recommendation: Block all auto-forwarding at all levels for users. Leave it to the professionals – your messaging administrators who can enable auto-forwarding at the server level.

I can’t think of too many business requirements for auto-forwarding, but I am sure there are some out there. I hope this little update helps you understand it some more.


Selectors: The Magic Sauce of DKIM

One question I see a lot is “How can I let 3rd party vendors send as our organization using DKIM?” It’s a lot easier than you think.

The trick is in the selector. Per RFC 6376: “To support multiple concurrent public keys per signing domain, the key namespace is subdivided using ‘selectors’.”

Implementing this is pretty straightforward, so let’s get started.


Suppose you have your existing DKIM infrastructure handled by Office 365/ EOP.

When sending a message through Office 365/EOP, the header of the message is stamped with the required DKIM fields.

Check out the sample header in the received message below. Note the s=selector1. This tells the receiving server which record to check:

DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;; s=selector1; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version; bh=qwJgpoXgR3MRDrSVO91kT+tYSpE//LjikNGicqlKjU0=; b=FnK8HjJFfEKHMq5EoIGJVzty4w+v7uE0UmQVFrVYr348e4tqfE66U/pZanlNfS7guhj2T5g5sqva7w1Wc1/+NOlC6CEBMrQiuFVDo0Akk8narhX9r9xs99Yniv…


In your organization’s external DNS, you have a CNAME record of that selector:    canonical name =

Following the DNS pointer…

In the Office 365 DNS is something like this text record with the public signing key:       text =


The receiving server can now run its calculations against the message, knowing the public signing key.

So you can see where we are going with this.

If you want a 3rd party vendor authorized to send as your company and apply a DKIM key to each message, you have a few options:

Create a unique selector CNAME – different from the one you use for messages coming from your organization – in external DNS that points to the 3rd party vendor’s DNS which contains the public DKIM signing key. This is similar to what Office 365 tenants do.


Use a unique selector and create the DNS text record that has the public DKIM signing key provided by the vendor. Remember: They are generating the messages, so the 3rd party vendor has the private key, you do not!


Each method will work, and it’s really up to you. Note that if you decide to create the text record in your DNS with the public signing key, DKIM will break for those messages if the 3rd party vendor later changes the private signing key that they hold.

I think it goes without saying that the one thing you don’t want to do is provide “your” private signing key to a 3rd party vendor and have them sign messages using your “regular” selector – the one you use for messages that actually do come from your domain. At least I wouldn’t recommend that.

Once this is all set up, it’s up to the 3rd party to set the selector correctly in the message header. So, if EOP is stamping “selector1” on all outbound messages, the 3rd party vendor can use anything allowed by the RFC except selector1.

As an example, headers received from the vendor, sending as you, may stamp it with:

DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;; s=contosoBULK

Receiving servers will now check the text record: and, depending on how you set it up, obtain the public signing key or get redirected by CNAME to another DNS zone.

This also works great for subdomains – i.e., have the 3rd party send as a subdomain and set up the DKIM records for that specific SMTP domain.

There is no real limit to the number of selectors one domain can support; just ensure they are unique to each sender and configured properly so receiving systems can correctly retrieve the DKIM public signing key.

With the advent of so many cloud services, I suspect just about every organization has at least one 3rd party sending as their SMTP domain, so get your DKIM (and SPF records!) right. I hope this helps you understand the process a little better.

For more info about DMARC/SPF and DKIM:

The Trinity of Email Protection: Lessons Learned using DMARC, DKIM and SPF in Office 365

My Top Five Lackluster Band Names

My personal list of the top 5 band names that show really no attempt to come up with anything clever or original. My criteria? None really. But these names have always annoyed me. I guess it doesn’t take much.

5. The Cars: Do I need to say more? I love these guys as much as the next person, but this is all they could come up with? Good luck getting Alexa to figure it out.  Be prepared to hear Gary Numan.

4. Mr. Mister: Why oh why?  I know they chose that name as a joke, but it felt so stupid to even say it.

3. Train: Right up there with “The Cars”.

2. Yes: No.

And the most lackluster…

1. The J. Geils Band: Naming a band after a founding member is not unusual or bad in and of itself, but usually it’s the artist most closely associated with the group – you know, the person who stands out or clearly represents them to the world. Not this one. How many people even know which one is Mr. Geils? Even worse, one of the members is named “Magic Dick”.

Why can’t I point my clients to the DAG Cluster IP?

I was reminded today of a question I used to see a lot in the forums. Not so much anymore, but perhaps a refresher is in order.

Granted, it seems almost brilliant to simply configure all the URLs and connection points to point at the DAG IP. After all, it does say its use is “Cluster and Client”  😛


And if that means there is no need to worry about load balancing, and you can just let Exchange handle it, then why not?

Here’s why:

  1. There is no Exchange dependency on the Cluster IP being online. Both Exchange 2013 and Exchange 2016 support IP-Less Database Availability Groups. The cluster IP can go offline and Exchange will run just fine. The only real reason to assign a Cluster IP address is if you are using backup software or another 3rd party application that requires it. If you run Exchange with the Preferred Architecture recommendations, you won’t be doing backups anyway!
  2. If the Cluster name goes offline and the IP with it, Managed Availability won’t attempt to bring it online. That requires manual intervention. Yuck.
  3. The Cluster IP is held by a specific mailbox server in the DAG at any one time – meaning all client connections will go through that multi-role server and no others.
  4. If the quorum owner moves to another server, there is no guarantee that the clients will handle that gracefully.
  5. The only way to prevent a server from end-user client access in this scenario is to pause or stop the cluster service on the affected server.






What really happened to the cast of “Leave it to Beaver” (and a reminder about the DAG Replay Manager)

If you are using lagged copies, hopefully you have also enabled the Replay Lag Manager. Once you do so, be aware of the implications. Most notably:

“consider an environment where a given database has 4 copies (3 highly available copies and 1 lagged copy), and the default setting is used for ReplayLagManagerNumAvailableCopies. If a non-lagged copy is out-of-service for any reason (for example, it is suspended, etc.) then the lagged copy will automatically play down its log files in 24 hours.”

To repeat: by default, if a non-lagged copy is out of service for more than a day, the lagged copy of that database will play down its logs and essentially become an HA copy.
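You can check whether the Replay Lag Manager is on, and enable it, at the DAG level (“DAG1” is a placeholder name):

```powershell
# Check the current Replay Lag Manager state for the DAG
Get-DatabaseAvailabilityGroup DAG1 | Format-List ReplayLagManagerEnabled

# Enable it, so lagged copies are played down automatically when
# copy redundancy drops below the configured threshold
Set-DatabaseAvailabilityGroup DAG1 -ReplayLagManagerEnabled $true
```
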

So consider this scenario: your servers have a mix of HA and lagged copies on the same drives. One of them encounters a hardware issue, so you suspend all the databases on it and block activation until you can fix the problem. But that’s OK – there are three healthy copies of the databases on other servers. Here is the catch, though: they have to be three HA copies. If it’s two HA copies and one lagged, then log play-down will kick off on those lagged copies after 24 hours if you haven’t changed the default, and there go the suspenders you were counting on in case the belt fails.

It sounds obvious, but this could bite you if you aren’t paying attention and you suddenly realize two days later that the replay queue lengths of all the affected databases are at zero. Stay safe out there.




Note that in Exchange 2016 CU1, the Replay Lag Manager is enabled by default – along with other goodies!

As for what happened to the cast of “Leave it to Beaver”, well, not much really.




My Top 5 Exchange Experts to Follow and 2 I Wish I Could

In the spirit of making meaningless lists, I thought I would put together my own compilation. These are in no particular order or rank.

Five to Follow

  1. Paul Cunningham: Paul is my go-to, how-to guy. His blog posts are informative, easy to read and hit the mark. He is the only Australian I know. That counts for something.
  2. Tony Redmond: No explanation needed here. I have followed Tony since my 5.5 days, and believe me, it makes him nervous. I was there when he announced that he had passed the “Clap” to the Exchange Product Group. I think I should get a t-shirt for that.
  3. Andrew S Higginbotham: I love his blog posts. A lot of common-sense fixes for those annoying issues we all run into. He’s younger than me and that pisses me off.
  4. Jeff Guillet: Jeff has the uncanny ability to always have a blog post ready just when it’s needed. And don’t forget to read his ADFS stuff as well! You will typically find Jeff at Ignite sessions propped up against a wall near the front.
  5. Paul Robichaux: Probably the best-dressed MVP. I love listening to Paul talk. He has a very reassuring manner and tone. We all know how good he is; no explanation needed for his inclusion here either.

Two I Wish I Could Follow

  1. Ed Crowley: Ed has been doing this stuff a long time, so I’m sure he has no desire to be followed by anyone. I would never physically follow him, however; that would only lead to some bus that takes 5 hours to get to the conference just to save a few bucks.
  2. Rich Matheisen: The original Exchange NewsGroup King, Rich has retired from both work and MVP-dom. I learned more about the SMTP RFCs from him than I can ever thank him for. Enjoy your retirement, Richard.


I left a lot of people off this list of course, including myself. 😛

It’s safe to say that all the Exchange MVPs I know and love are worth following and listening to, well, except a few. That list is only viewable at Joey’s in Bellevue, WA.



Sanity Checking Lagged Copies – To SIR* With Love

I seem to recall a presenter posing a question about lagged copies at a recent MEC conference, or maybe it was last year at Ignite. Anyway, the speaker asked for a show of hands from anyone using Exchange lagged copies in their org, and the number was, well… you could count them on your hands. Hopefully that has increased since then. Personally, I don’t see why you wouldn’t use lagged copies if you are going to go the HA route. I’ll concede that a nice wizard to activate a lagged copy would be optimal, but nonetheless, with documentation and defined procedures, an experienced admin can get over any fear they may have of going backup-less. (Is that a word?)

If you decide to use lagged copies, there are already a number of good tutorials out there. I like my friend Paul’s easy to read article:

Once you are set up, you will hopefully never need to look at them again. But if you aren’t so lucky and experience some event that requires a lagged copy activation or log replay – whether through admin intervention or by Exchange itself** – or you just want to periodically ensure things are level-set, here are some things to check post-outage/problem/log play-down/just because:


1. Get-MailboxDatabase * | ? {$_.CircularLoggingEnabled -eq $false}

This should return no results. I assume you are lagging for a reason, right? Hopefully to get rid of backups. No backups means no log truncation, so you need to enable circular logging.


2. Get-MailboxDatabaseCopyStatus * | ? {$_.ActivationPreference -eq "4"} | select Name, Status, *QueueLength*, LastInspectedLogTime, ContentIndexState, ReplayLagStatus, ActivationSuspended, ActionInitiator, ActiveCopy | ogv

Output this to a sortable grid view for a quick and easy check. Note the $_.ActivationPreference -eq "4" filter: the assumption here is that you are running four copies – three HA, one lagged. If not, filter on whatever activation preference your lagged copies are set to.

You should see something like the image below. It’s nice and sortable and allows for quick verification.



What to look for:

Status: Healthy

CopyQueueLength: 0 or close to it

ReplayQueueLength: above 0. Remember, you are checking just the lagged copies here, so each should have a replay queue length.

ContentIndexState: Healthy

ReplayLagStatus: Enabled:True; PlayDownReason:None; Percentage:100; Configured:8.00:00:00 (the actual value should be equal to or above the configured value – in this example, the replay lag is set to 8 days). If you see a copy with a PlayDown reason, it’s time to investigate.

ActivationSuspended: True (assuming you have blocked automatic activation on the lagged copies)

ActionInitiator: Administrator (assuming you have blocked automatic activation on the lagged copies)

ActiveCopy: False.

If any copies are not set to your desired settings, correct them!



Set lagged replay on 4th Preference DB to 7 days: Set-MailboxDatabaseCopy <DB>\<Server> -ActivationPreference 4 -ReplayLagTime  7.0:0:0

Disable Automatic activation for lagged database copy: Get-MailboxDatabaseCopyStatus <DB>\<Server> | Suspend-MailboxDatabaseCopy -ActivationOnly

Enable Circular Logging on the Database: Set-MailboxDatabase <DB> -CircularLoggingEnabled $true


* SIR= Single Item Retention. Recommended that you enable this for all mailboxes in a lagged environment running w/o backups. Belts and Suspenders.

** The Replay Lag Manager should be enabled in your environment. Be aware that under certain conditions, Exchange may automatically play down the lagged copies.