Monthly Archives: June 2014

DSN 5.1.1 Office 365 User could not email on-premise user

Had a strange issue where a user mailbox was created in Office 365 before Dirsync was enabled.

After dirsync was enabled and the domain name was validated, the same primary SMTP alias existed in two places: (1) on-premise where the real mailbox resided and (2) in the cloud where the POC/Pilot mailbox temporarily resided.

The problem was that messages from cloud users to the on-premise mailbox were never delivered on-premise, because the SMTP address matched against the cloud mailbox first.

After the license was removed from the cloud user, the cloud mailbox was deleted, but cloud users then received a DSN 5.1.1 NDR (bounce-back) when emailing the on-premise user.

The solution was described in this o365 community forum thread:

http://community.office365.com/en-us/f/613/t/238038.aspx

Essentially it was necessary to remove the MsolUser entirely and then let Dirsync re-create the mail-user object. Problem solved!

To confirm the symptom, running Get-MailUser in the remote PowerShell session returned no results, whereas a cloud mail-user should exist even for an on-premise mailbox. That missing object is why the DSN was being generated.
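A hedged sketch of that fix using the Azure AD module (the UPN below is a placeholder; double-check that you are removing the stale cloud object and not a live mailbox):

# Remove the orphaned cloud user entirely (assumes connect-msolservice has already run)
Remove-MsolUser -UserPrincipalName pilotuser@contoso.com

# Purge it from the deleted-users recycle bin so Dirsync can re-create it cleanly
Remove-MsolUser -UserPrincipalName pilotuser@contoso.com -RemoveFromRecycleBin

# Then force a sync from the Dirsync server so the mail-user object is re-created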

One work-around that also seemed to work was to set the domain in the cloud to internal-relay instead of the default authoritative. That did not seem like the cleanest way to solve the problem, even though it is the required configuration during a hybrid migration:  http://support.microsoft.com/kb/2730609
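For reference, that internal-relay work-around looks something like this in the Exchange Online remote PowerShell (contoso.com is a placeholder domain):

# Switch the cloud accepted domain from Authoritative to InternalRelay so that
# unresolved recipients are relayed on-premise instead of generating a 5.1.1 NDR
Set-AcceptedDomain -Identity contoso.com -DomainType InternalRelay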

Combined Powershell script for managing both Azure AD and Exchange Online

_________________BEGIN Connect.ps1________________________

$LiveCred = Get-Credential
$Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://ps.outlook.com/powershell/ -Credential $LiveCred -Authentication Basic -AllowRedirection
Import-PSSession $Session -AllowClobber
Import-Module MSOnline   #requires the Azure AD (MSOnline) module to be installed
connect-msolservice -credential $LiveCred
#Remove-PSSession $Session

__________________END Connect.ps1_________________________

 

The above script connects to two services: (1) Azure Active Directory remote powershell and (2) Exchange Online remote powershell.

This is useful because the former is required to assign and manage licenses for Dirsync’d users in Office 365, and the latter is required for managing mailboxes and mailbox moves in Exchange Online.

By combining the two sessions, administration happens in a single PowerShell window instead of two.

One of the most common misconceptions about mailbox moves to Exchange Online with PowerShell is that they can be run from the on-premise shell; in fact, the move must be run in a remote PowerShell session against Exchange Online (see the move script below for an example).

One of the most common tasks when getting started with Office 365 is to bulk license users based on a CSV file containing email addresses. The maintenance script below was created to perform multiple actions based on a source CSV file.

___________BEGIN Maintenance.ps1 ___________________

Import-csv c:\users.csv| foreach {

$UPN = $_.email

#The line below is great for testing the CSV file match against Cloud UPN. Helps you understand if your CSV file email addresses are matched up perfectly against cloud UPN addresses.

#get-Msoluser -UserPrincipalName $UPN

#the next line is great for getting unlicensed users. This helps you identify any unlicensed users that need a license applied.

#get-msoluser -UserPrincipalName $UPN | where {$_.IsLicensed -eq $false}

#The line below sets usage location and is required for every user.

#set-msoluser -userprincipalname $UPN -UsageLocation US

#The next two lines assign licenses. In order to get <tenant name> you run this command: get-msolaccountsku (remove the <>)

#$MSOLSKU = "<tenant name>:ENTERPRISEPACK"

#Set-MsolUserLicense -UserPrincipalName $UPN -Addlicenses $MSOLSKU

}

___________END Maintenance.ps1 ___________________

 

Now that you have licensed your users, it is time to move mailboxes! (This assumes you have already completed the steps in the Exchange Deployment Assistant for configuring a Hybrid environment.)

_______________Move Script.ps1_______________

#When prompted, enter your on-premise AD username and password like Domain\User that is a member of the Exchange Organizational Admins group

#Remember – this script is to be called from within a remote powershell session against Exchange Online, not using your on-premise Exchange Management shell!

$cred = get-credential

Import-csv .\user.csv | foreach {

$UPN = $_.Email

New-MoveRequest -Identity $UPN -Remote -RemoteHostname 'myhybridserver.mydomain.com' -RemoteCredential $cred -TargetDeliveryDomain 'mytenantname.mail.onmicrosoft.com' -BadItemLimit 100 -AcceptLargeDataLoss -LargeItemLimit 100 -SuspendWhenReadyToComplete

}

_______________End Move Script.ps1_______________
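Because the move request uses -SuspendWhenReadyToComplete, each move pauses once it has synced and must be finished manually. A hedged sketch of completing the suspended moves, run in the same Exchange Online remote session:

# Find moves that have synced and auto-suspended
Get-MoveRequest | Where-Object {$_.Status -eq "AutoSuspended"}

# Clear the suspend flag and resume each one to finish the cutover
Get-MoveRequest | Where-Object {$_.Status -eq "AutoSuspended"} | ForEach-Object {
    Set-MoveRequest -Identity $_.Identity -SuspendWhenReadyToComplete:$false
    Resume-MoveRequest -Identity $_.Identity
}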

 

Tips and Tricks

  1. After you’ve completed the tasks you wanted to perform in the Exchange Online organization, you need to disconnect the session between your local computer and the Exchange Online organization.

Use the following command to disconnect remote PowerShell from the Exchange Online organization.

Remove-PSSession $Session

If you close the remote Windows PowerShell window without following this procedure, the session has to time out on its own (in approximately 15 minutes), and the quota on concurrent connections (a maximum of 3 are allowed) may prevent you from reconnecting to the service in a timely fashion.

2. If you are setting up a new o365 tenant and your on-premise AD domain has a default UPN suffix like “myad.local”, you can configure Directory Sync to use an alternate login ID, such as the mail attribute, so that the email address is mapped to the UPN field in o365. This is beneficial because it saves the effort of changing UPNs on-premise!

http://social.technet.microsoft.com/wiki/contents/articles/24096.using-alternate-login-ids-with-azure-active-directory.aspx

Recent change to Dirsync

It is also important to note that starting with DirSync version 6862.0000 released on June 5 2014 there is no longer a DirSyncConfigShell Console file in the Program Files folder. Instead you just start a normal PowerShell window and run Import-Module DirSync. After that the Start-OnlineCoexistenceSync cmdlet is available.
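On a server running that build, forcing a manual sync now looks like this:

# Run on the Dirsync server in a normal (elevated) PowerShell window
Import-Module DirSync
Start-OnlineCoexistenceSync

# Add -FullSync to request a full rather than delta synchronization
# Start-OnlineCoexistenceSync -FullSync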

Common Dirsync Questions

  • Even though Dirsync is configured to sync by default once every three hours, you can manually force dirsync to run at any time.
  • You can also configure the default interval to run in shorter increments
  • The default interval for Dirsync is completely separate from the password synchronization interval. Passwords are synced to Azure AD immediately, and the average time before they take effect is usually under 3 minutes.
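Regarding the second bullet, the sync interval on the classic Dirsync tool is controlled by a setting in Microsoft.Online.DirSync.Scheduler.exe.config in the Dirsync installation folder. This is a hedged sketch (the exact path and key may vary by version), and the sync service must be restarted for the change to take effect:

<!-- In Microsoft.Online.DirSync.Scheduler.exe.config -->
<!-- Format is hours:minutes:seconds; the value below syncs hourly instead of every 3 hours -->
<add key="SyncTimeInterval" value="1:0:0" />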

Minimum Exchange Hybrid Server Requirements for Managing On-Premises Users

Recently I was trying to locate guidance for the minimum requirements that an Exchange Hybrid Server would need if the only purpose for the server was to manage on-premise remote mailboxes. An on-premise Hybrid Exchange Server is still beneficial to manage the proxy alias attribute since Directory Synchronization is mostly one direction and therefore you cannot update the proxy aliases for a mailbox in Office 365’s administrative portal. You can use ADSIEdit to manage proxy aliases on-premise, but that is not practical for large organizations wishing to use RBAC.
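For example, adding a proxy alias to a Dirsync’d user is done against the on-premise remote-mailbox object in the hybrid server’s Exchange Management Shell (the identity and alias below are placeholders); Dirsync then writes the change up to Office 365:

# Add a secondary SMTP alias on-premise; Dirsync syncs it to the cloud mailbox
Set-RemoteMailbox -Identity jdoe -EmailAddresses @{Add="jane.doe@contoso.com"}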

So I posted this question on the new Office 365 IT Pro Yammer group and got a quick response from an MVP named Steve Goodman:

“An Exchange 2010 Hub Transport server role or Exchange 2013 multi role – with Hybrid keys – will do the trick.
After install you can then manage users, which will show as remote mailboxes (within contacts) in 2010 and Office 365 mailboxes in 2013.
Add a remote domain and other accepted domains in Exchange and set the remote domain as the Office 365 tenant domain. Set the accepted domains as internal relay. Alter email address policies to suit, as they will take effect as you manage or create users.
If you use a multi-role or CAS server beware the AutoDiscover SCP as it will cause cert warnings. Set it to $null using Set-ClientAccessServer <server> -AutoDiscoverServiceInternalURI:$null
More guidance in [Steve Goodman’s] article here http://searchexchange.techtarget.com/tip/Best-practices-for-managing-Office-365-from-Active-Directory”

So I learned that you do not have to run the Hybrid Configuration wizard.

Steve’s blog post does not include the syntax for creating a new remote domain, so I used PowerShell to create it:

New-RemoteDomain -Name contoso.mail.onmicrosoft.com

Set-RemoteDomain -Identity contoso.mail.onmicrosoft.com -TargetDeliveryDomain $true

Then, according to this MSFT blog, if you want the changes to take effect immediately, you have to restart IIS.

Steve points out in his blog that another alternative to ADSIEdit or the Hybrid server for managing the proxy aliases is a PowerShell module written by Andreas Lindhal at 365lab.com.

The only thing I would add to Steve’s guidance is that you may need to convert some of the mailboxes to remote mailboxes using the Enable-RemoteMailbox command; otherwise the corresponding local object will not exist in the local AD to manage.
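A hedged example of that conversion, run from the on-premise hybrid shell (the identity and tenant domain are placeholders):

# Mail-enable an existing AD user as a remote mailbox so the object can be
# managed on-premise; the mailbox itself stays in Office 365
Enable-RemoteMailbox -Identity jdoe -RemoteRoutingAddress jdoe@contoso.mail.onmicrosoft.com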

Offline Root CAs require periodic maintenance

In most environments where an offline Root CA is used, it must come back online once every 7 months to provide the Subordinate CAs with an updated CRL. If this does not happen, the Subordinate CA will stop issuing certificates: the CA service on the Subordinate will no longer start up, and the error message will be “The revocation function was unable to check revocation because the revocation server was offline”.

I recommend performing the following steps every 6 months (to allow for a 30-day cushion):

1. Power up the Offline Root CA

2. On the Offline Root, run this command:
certutil -crl

3. The command above will re-issue the CRL. Now copy the CRL from the c:\windows\system32\certsrv\certenroll directory to the Subordinate Issuing CA

4. The next step is to install the CRL into the Subordinate CA with this command:

certutil -addstore CA <name of file>
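The four steps above can be consolidated into a quick maintenance sketch (the .crl file path below is illustrative; your CRL file name will match your Root CA’s name):

# --- On the Offline Root CA ---
certutil -crl

# Copy the newly issued .crl file from c:\windows\system32\certsrv\certenroll
# to the Subordinate Issuing CA (removable media works for an offline root)

# --- On the Subordinate Issuing CA ---
certutil -addstore CA "c:\temp\MyRootCA.crl"
net stop certsvc
net start certsvc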

CA best practices and maintenance procedures are located here:
http://technet.microsoft.com/en-us/library/cc782041(v=ws.10).aspx

How to prevent Ransomware from infecting your Enterprise Applications

Everyone has heard of spyware and malware. Ransomware is becoming an all too familiar term, but I feel many IT organizations assume it is a threat isolated to consumers rather than enterprises. In my opinion, most IT organizations are uneducated about the attack vectors that ransomware can use to infect an IT infrastructure.

Case in point, most companies that I interact with do not prevent their IT System Administrators from using Internet Explorer (or other web browsers) from the console of their servers. Only a handful of companies that I have encountered over the years actually restrict outbound TCP connections on the firewall to thwart IT Sys Admins from web browsing on server consoles.

Why is this significant, and how does this behavior relate to the topic of Ransomware? This is the attack vector that most IT Organizations are unaware of. Most of the IT Systems Administrators that I have encountered have justified their behavior of using a web browser on a server by stating that they are smart enough to only browse “Safe” websites to download hotfixes, patches, or search for error messages on IT forums. It is that false assumption that can allow Ransomware to infect an Enterprise. I will explain below how an Enterprise Application such as Microsoft Exchange Server could be taken down by such behavior.

The alarm needs to be sounded because leading security researchers have shown that one of the most successful attack vectors is hackers placing malicious advertisements on legitimate web sites. You could be browsing a completely legitimate and “trusted” web site, but because of an advertisement that contains malicious code, your web browser becomes the attack vector that downloads a payload into your infrastructure! Today, June 10th 2014, Microsoft released hotfixes for 59 vulnerabilities in Internet Explorer. This shows that attackers are going after the web browser to target the enterprise. Hackers are smart enough not to hit an enterprise head-on by attacking the firewall. Instead they target the weak points in the infrastructure, namely the end user who is browsing legitimate web sites. Some of these vulnerabilities are “zero day”, meaning that attackers discovered the vulnerability before the good guys and no patch is available to fix the problem. These lurking vulnerabilities can lie dormant on a web server for weeks or months before being discovered.

Now, imagine if one of your Domain Administrators browsed a legitimate web site which contained an advertisement placed by a hacker. It is safe to assume that any server that Domain Admin had access to could now be “owned” by ransomware, because most of the recent advanced persistent threats (APTs) spread across multiple attack vectors once they infect just a single host. Once ransomware lands on a host, the only way to unlock the data is to pay the ransom! When searching for products to remove the ransomware, use caution, because most of these so-called cures are actually viruses that masquerade as ransomware removal tools!

I think most readers would agree that we are now talking about a very real scenario, because legitimate websites are being compromised through advertisements. IT sysadmins who use privileged accounts and browse the web to search for solutions to error messages (a common sysadmin task) are the most at risk. They are also exposed when downloading patches or drivers onto a server directly from the internet, because it is more convenient than copying them over the network from their workstation.

I highly recommend reading this Cyber Heist newsletter (not from your server console, and not while logged in with your Domain Administrator account!). In this newsletter, the author describes the latest advances in ransomware and I promise it will open your eyes to just how bad things have gotten! I don’t blame you if you were too paranoid to click on the link after reading this blog. =)

 

The threat to Enterprise Applications: Case Study: Microsoft Exchange

The Microsoft Exchange “Preferred Architecture” was published by Microsoft on April 21st 2014 and recommends against traditional backups. I think you know where this is going if you read the Cyber Heist Newsletter referenced above.

“With all of these technologies in play, traditional backups are unnecessary; as a result, the PA leverages Exchange Native Data Protection.”

Gulp.

The limitation of Exchange Native Data Protection (mailbox replication) is that all copies of the mailbox data are accessible from the Layer 3 IP network (a requirement for replication to work). The doomsday scenario is that a worm or skilled hacker could destroy or “ransom” all copies of the data. This would leave an organization with 100% data loss. Not only is Office 365 susceptible to this threat, but so are all customers who follow Microsoft’s preferred architecture.

Therefore, Exchange Administrators should carefully consider the risk of a worm or hacker before completely eliminating traditional backups. Every other layer in your defense-in-depth security apparatus had better be air tight! For example, you would have less risk if you deploy a whitelisting solution such as Bit9, Lumension, or Microsoft AppLocker. However, it is nearly impossible to eliminate all risk: according to the McAfee Phishing Quiz, 65 percent of respondents can’t properly identify email scams, so the human responsible for deciding what to allow into the whitelist could theoretically be tricked into trusting ransomware.

 

Prevention

  1. To reduce the risk of ransomware spreading to servers, prevent IT Administrators from browsing web pages while logged onto a server. If servers are located in a separate IP subnet, create an ACL to block outbound 80 and 443 requests from the server subnet. The caveat is that this could break applications that rely on outbound internet connections, so enable the ACL in logging mode first, build a whitelist of allowed sites, and then block everything else. The downside is the added administrative burden on the firewall administrator to maintain the ACL. The alternative, however, of permitting IT Administrators to browse websites while logged onto servers is to accept the risk of infecting the entire server farm with a worm, virus, or ransomware.
  2. Create an IT policy, signed by administrators, stating that they will not browse the internet using privileged accounts such as Domain Admin credentials on any workstation. Consider deploying a proxy server that uses RADIUS or Windows Authentication, and only allow a global group that does not contain these admin accounts.
  3. Research commercially available whitelisting solutions (e.g., Bit9, Lumension, or Microsoft AppLocker).

This approach would not prevent all worms, ransomware, and hackers from getting onto your servers, because modern advanced persistent threats (APTs) spread and distribute themselves across multiple attack vectors. For example, just one infected laptop with IP connectivity to the back-end servers could spread by exploiting a vulnerability in an unpatched third-party application. Even unpatched security products from the top security vendors have, ironically, been used to infiltrate a server. Therefore, Kevin Mitnick-style security awareness training is also recommended.

 

Disclaimer: This blog post is for educational use only. Both myself and my employer are not responsible for any actions you take or do not take as a result of reading this blog post.