Monthly Archives: February 2013

Microsoft Office 365 Federation Metadata Update Automation Installation Tool

What is the Microsoft Office 365 Federation Metadata Update Automation Installation Tool?

This tool automates an otherwise manual process which, if skipped, prevents all users from signing in to Office 365 when the token signing certificate expires (once per year). The tool is a PowerShell script that creates a scheduled task to tell Office 365 to trust the new self-signed certificate.
Get the tool:
http://gallery.technet.microsoft.com/scriptcenter/Office-365-Federation-27410bdc 

[Update 2/15/2013]
It turns out that it is still necessary to restart the internal ADFS service after the token signing certificate has been issued.

Who needs the tool?

All Office 365 customers that have implemented Single Sign-On with AD FS 2.0 must update their token signing certificate every year, or users will be unable to sign in. All of them would benefit from this tool; without it, they have to predict when the certificate expires and then follow a manual process to trust the new one.

What if the tool is not installed? What is the manual process?

I welcome this tool. Last year, I blogged about the manual steps to predict when the token signing certificate must be updated.
http://blogs.catapultsystems.com/IT/archive/2012/03/07/cannot-sign-on-to-office-365.aspx
Fixing the problem is not too difficult; preventing it from occurring, however, is somewhat confusing. So I highly recommend that all customers use this tool!
If you do not want to run the tool, here is what you must do:

1. Find out when your existing token signing cert will expire.

2. Subtract 20 days from the expiration date.

3. From that date, AD FS will automatically issue a new certificate that co-exists with the primary certificate for 5 days (the default period, which can be configured to be longer). At the end of that 5-day period, the new token signing certificate is made primary, and this disrupts service until someone runs a PowerShell command to force Office 365 to trust the new certificate. This is necessary because it is a self-signed certificate, so Office 365 must be informed, by a person or by an automated task, that the certificate has changed. This is exactly what the tool above automates, so that an outage is avoided even if no one predicts the date correctly.
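These dates can be computed from the primary certificate's expiration with a few lines of PowerShell on the ADFS server; a rough sketch, assuming the default 20-day rollover and 5-day promotion settings described above:

```powershell
Add-PSSnapin Microsoft.Adfs.PowerShell

# Find the token signing certificate currently marked primary
$primary = Get-ADFSCertificate -CertificateType Token-Signing |
    Where-Object { $_.IsPrimary }

# Rollover begins 20 days before expiration; the new certificate
# is promoted to primary 5 days after that (default settings)
$rolloverStart = $primary.Certificate.NotAfter.AddDays(-20)
$promotionDate = $rolloverStart.AddDays(5)

"New certificate will be issued on: $rolloverStart"
"New certificate becomes primary on: $promotionDate"
```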

For example, this is what you will see during the 5-day period when the new certificate has been automatically issued but has not yet been made primary. The old certificate shows ‘IsPrimary’ = True, and the new one is present but not yet primary.
Launch PowerShell and type the following:
Add-PSSnapin Microsoft.Adfs.PowerShell
Get-ADFSCertificate -CertificateType token-signing
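The PowerShell command referenced in step 3, which tells Office 365 to trust the new certificate, is Update-MSOLFederatedDomain from the Microsoft Online Services Module. A hedged sketch; the server name and domain below are placeholders for your own:

```powershell
# Run inside the Microsoft Online Services Module for Windows PowerShell
Connect-MsolService                    # prompts for Global Administrator credentials
Set-MsolADFSContext -Computer adfs01   # hypothetical internal ADFS server name
Update-MSOLFederatedDomain -DomainName contoso.com   # hypothetical federated domain
```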

What happens if I ignore the expiration date of the token signing cert?

You’ll find out – your users will call you when they are unable to sign into Outlook, or Outlook Web Access. They will get an error “There was a problem accessing the site. Try to browse the site again.”

What does the tool require?

  • You need to have a functioning AD FS 2.0 Federation Service
  • You need to have access to Global Administrator credentials for your Office 365 tenant
  • At least one verified domain in the Office 365 tenant must be of type ‘Federated’
  • The tool must be executed on a writable Federation Server
  • The currently logged-on user must be a member of the local Administrators group
  • The latest version of the Microsoft Online Services Module for Windows PowerShell must be installed. You can download the module from http://onlinehelp.microsoft.com/en-us/office365-enterprises/ff652560.aspx
Running the tool

After you download the tool (a PowerShell script) onto your internal ADFS server, right-click the file and unblock it. Otherwise you will get errors like “the script is not digitally signed. The script will not execute on the system.”

Also, if you get the error “Failed MSOL credential validation.” it is because you are running the script in the regular Windows PowerShell console or the ADFS PowerShell module. Make sure you run it in the “Microsoft Online Services Module for Windows PowerShell” console, which has its own shortcut on the desktop.

Then change directory to the location where you saved the script and run it.

Verifying it worked

Launch Task Scheduler. You will see the new task has been scheduled to run at midnight every day.

Recommendation: Because the scheduled task runs under the account you were logged in with, you will need to remember to update the task whenever you change your password, or, better, run the tool with a service account that has a non-expiring password (recommended!). It would defeat the purpose of all this effort to have the script fail when you are counting on it because the password changed and the scheduled task could not run.
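One way to switch the task over to a service account after the fact is schtasks.exe; a sketch, where the task name and account are hypothetical (check Task Scheduler for the actual name of the task the tool created):

```powershell
# Re-point the scheduled task at a service account with a non-expiring password
schtasks /Change /TN "Office365-FederationMetadataUpdate" `
    /RU CONTOSO\svc-msolfed /RP *
# /RP * prompts for the password instead of putting it on the command line
```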

How to recover missing emails in Office 365

When an email is deleted, where does it go? It goes to the Deleted Items folder.
When the Deleted Items folder is emptied, where do the items go? They go to a hidden folder called Deletions. How long deleted items remain in this folder is based on the deleted item retention settings configured for the mailbox database or the mailbox. By default, an Exchange 2010 mailbox database is configured to retain deleted items for 14 days, and the recoverable items warning quota and recoverable items quota are set to 20 gigabytes (GB) and 30 GB respectively. These settings are configurable with on-premises Exchange:
http://technet.microsoft.com/en-us/library/ee364752.aspx
With Exchange Online Plan 2, you can increase this from 14 to 30 days. The Recoverable Items folder does not count against the user’s primary mailbox quota.
http://jorgerdiaz.wordpress.com/2012/07/19/office-365-changes-legal-hold-and-single-item-recovery/
Note: These items can still be recovered by the end user by highlighting the Deleted Items folder and clicking ‘Recover Deleted Items.’
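The retention window can be inspected or changed per mailbox with Set-Mailbox; a sketch using a hypothetical mailbox (RetainDeletedItemsFor accepts a value in days):

```powershell
# Extend deleted item retention to 30 days for one mailbox (hypothetical identity)
Set-Mailbox -Identity [email protected] -RetainDeletedItemsFor 30

# Confirm the settings that are in effect
Get-Mailbox -Identity [email protected] |
    Format-List RetainDeletedItemsFor,RecoverableItemsQuota,RecoverableItemsWarningQuota
```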

When this recoverable items folder is purged, where do those emails go?
It depends on whether single item recovery has been enabled on the mailbox. When single item recovery is enabled and the Recoverable Items folder is emptied, the items remain in a hidden folder that the user cannot alter in any way: Recoverable Items\Purges.
Two mechanisms can be used to configure Single Item Recovery in Exchange 2010:

  • rolling legal hold = Time-limited safeguarding of data, where items are stored in the Recoverable Items folder for a predefined retention period. The retention period is set per mailbox (or the mailbox database defaults apply if no value is set for the mailbox).
  • litigation hold = Unlimited safeguarding of data, where items in the recovery folder are never purged. Retention period and quota limits set on a mailbox under litigation hold are ignored. This ensures that deleted mailbox items and record changes won’t be purged.

The following example assigns a 7-year rolling legal hold to a mailbox. It is important to note the mailbox won’t be on legal hold for 7 years; this is a tag stating that any new message will be retained for 7 years after it is created or received by the mailbox. So a message that arrives on 2.6.2013 will be kept until 2.6.2020.

Set-Mailbox -Identity [email protected] -LitigationHoldEnabled $True -LitigationHoldDuration 2555
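The 2555 comes from 7 × 365 days. You can verify what was applied with Get-Mailbox; a sketch against the same hypothetical mailbox:

```powershell
# 7 years expressed in days: 7 * 365 = 2555
Get-Mailbox -Identity [email protected] |
    Format-List LitigationHoldEnabled,LitigationHoldDate,LitigationHoldDuration
```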

With Single Item Recovery enabled, items will remain in the Recoverable Items\Purges folder even if the mailbox owner deletes items from their inbox, empties the Deleted Items folder and then purges the contents of the dumpster. These items can then be searched for by a compliance officer if required, as the items are both indexed and discoverable. Additionally, these items will move with the mailbox if the mailbox is moved to a different mailbox database.
Why not always enable single item recovery?
1. You need to make sure you plan for the additional disk space required. See this article for more information on planning for single item recovery.
http://www.msexchange.org/articles-tutorials/exchange-server-2010/high-availability-recovery/single-item-recovery-part2.html

2. You have to enable it on each individual mailbox; you can’t set a policy that says “all mailboxes will always have it enabled.” It would be great if newly created mailboxes could automatically be enabled for single item recovery, but that is not how Exchange currently works.
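Enabling it per mailbox is a one-liner, and you can at least batch over existing mailboxes by piping Get-Mailbox into Set-Mailbox; a sketch (the mailbox identity is hypothetical, and mailboxes created later will still need the same treatment):

```powershell
# Enable single item recovery for one mailbox
Set-Mailbox -Identity [email protected] -SingleItemRecoveryEnabled $true

# Or sweep all existing mailboxes (does not cover mailboxes created afterwards)
Get-Mailbox -ResultSize Unlimited |
    Set-Mailbox -SingleItemRecoveryEnabled $true
```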

But what if you want to move those items out of the Recoverable Items\Purges and back into the user’s mailbox?

Recovering items from this hidden Purges folder can only be performed by an Exchange or Office 365 Administrator.
There are three options for restoring items from the Purges folder. My favorite is Option 3 (MFCMAPI) because it can restore the items back to the user’s deleted items folder.

Option 1: You can use PowerShell
http://technet.microsoft.com/en-us/library/ff660637.aspx
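The PowerShell route is built around Search-Mailbox with the -SearchDumpsterOnly switch, which copies matching purged items into a folder of a second mailbox, from which they can be moved back to the user. A hedged sketch; the mailbox names and search query below are hypothetical:

```powershell
# Copy purged items matching a query from the user's dumpster
# into a "Recovered" folder in an administrator's mailbox
Search-Mailbox -Identity [email protected] `
    -SearchQuery 'subject:"Quarterly Report"' `
    -SearchDumpsterOnly `
    -TargetMailbox [email protected] `
    -TargetFolder "Recovered"
```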

Option 2: You can use the Exchange Control Panel’s eDiscovery search
Create an In-Place eDiscovery Search

or

Option 3: Use MFCMAPI

Instructions for using MFCMAPI to restore items from the Purges folder.
1. Download MFCMAPI (use this tool at your own risk!)

http://mfcmapi.codeplex.com/releases/view/97321

2. Follow the screenshots on my older post (I have not yet migrated the pictures to this blog):

http://blogs.catapultsystems.com/IT/archive/2013/02/06/how-to-recover-missing-emails-in-office-365.aspx

Summary

While this tool is very powerful, it can also be very destructive (just like Regedit), so this author is not responsible for any damages caused by misuse of this tool. This post is for educational purposes only; use at your own risk!

References

Achieving Immutability with Exchange Online and Exchange Server 2010
History of MFCMAPI
Additional things you can do with MFCMAPI

Windows Update December 2012 (KB931125) Causes Issues with Lync Replication

We have had customers experience a problem with replication between the Lync Front End servers and the Edge servers. You can check the status by running this command:

Get-CsManagementStoreReplicationStatus

We discovered that a Microsoft patch issued in December was the culprit (Root Certificates Optional Windows Update December 2012, KB931125). The patch added over 300 trusted root CAs to the Trusted Root Certification Authorities list, and anything over 120 apparently stops replication from succeeding.

Resolution:

Option 1: Edit the registry on the Edge server to add a DWORD value named SendTrustedIssuerList under the key

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL

and assign it a value of 0. This prevents schannel.dll from truncating the root CA list sent from the Edge server and allows validation tests to pass.
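The same registry change can be scripted; a sketch in PowerShell (run elevated on the Edge server):

```powershell
# Stop SChannel from sending (and truncating) the trusted issuer list
New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL' `
    -Name 'SendTrustedIssuerList' -PropertyType DWord -Value 0 -Force
```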

Option 2: Open the Trusted Root CA store on the Edge server. If there are more than 120 certificates, delete unnecessary certificates until there are fewer than 120 in any of the trusted CA stores.
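A quick way to count how many certificates are in the machine's Trusted Root store before deciding which ones to prune:

```powershell
# Count certificates in the local machine's Trusted Root CA store
(Get-ChildItem Cert:\LocalMachine\Root).Count

# List them by subject to identify candidates for removal
Get-ChildItem Cert:\LocalMachine\Root |
    Sort-Object Subject |
    Select-Object Subject, NotAfter
```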

http://social.technet.microsoft.com/Forums/en-AU/ocsedge/thread/1cd3be72-1f65-48ae-aa8c-498f79917492

Once we added the registry key and restarted, replication began to work again.

Network Load Balancing: Multicast vs Unicast

Windows Network Load Balancing (NLB) is a pretty popular (free!) solution for quickly setting up load balancers.

You must choose either Unicast or Multicast operational mode.

Unicast – Each NLB cluster node replaces its real (hard-coded) MAC address with a new one generated by the NLB software, and every node in the cluster uses the same virtual MAC. Because this virtual MAC is used by multiple computers, a switch cannot learn a port for it and is forced to send packets destined for the NLB MAC out all switch ports to make sure they reach the right destination.
Multicast – NLB adds a layer 2 multicast MAC address to the NIC of each node, so each cluster node has two MAC addresses: its real one and its NLB-generated one. With multicast, you can create static entries in the switch so that it sends packets only to members of the NLB cluster. Mapping the address to the ports used by the cluster stops all ports from being flooded; only the mapped ports receive packets for the cluster. If you don’t create the static entries, switch flooding occurs just as in unicast mode.

So which one should you choose? And why?

In general, you should enable and use multicast NLB whenever possible. Use unicast mode only if your network equipment (switches and routers) does not support multicast, or if it experiences significant performance issues when multicast is enabled.

Other considerations

If you don’t have administrative access to modify the configuration of the switches and routers in your environment then you may be forced to use Unicast. You might also be forced to use Unicast if your routers do not support mapping a unicast IP to a multicast MAC address.
A second network adapter is required to provide peer-to-peer communication between cluster hosts in Unicast mode. A side effect of Unicast mode is “switch flooding;” network traffic is simultaneously delivered to all cluster hosts. 

If you only have a single NIC, Multicast is recommended, but plan on adding a static ARP entry to the switches and routers on your LAN, because most do not support this mapping by default. This is easy in a small environment, but in a large routed environment with multiple “side-car” routers, you would need to add an entry to each “side-car” router.
For example, on each Cisco router that has a “leg” into the LAN where the NLB cluster resides, enter configuration mode and enter these commands:

arp [ip] [cluster multicast mac*] ARPA
arp 192.168.1.100 03bf.c0a8.0164 ARPA
*the cluster’s multicast MAC address can be obtained from the Network Load Balancing Properties dialog box

By using the multicast method with Internet Group Management Protocol (IGMP), you can limit switch flooding if the switch supports IGMP snooping. IGMP snooping allows the switch to examine the contents of multicast packets and associate a port with a multicast address. Without IGMP snooping, switches may require additional configuration to tell them which ports to use for the multicast traffic; otherwise switch flooding occurs, as with the unicast method. Here are the commands to tell the switch which ports to use for multicast traffic:

mac-address-table static [cluster multicast mac] [vlan id] [interface]
mac-address-table static 03bf.c0a8.0164 vlan 1 interface GigabitEthernet1/1 GigabitEthernet1/2

[Update 2/16/2013]

Note: the mac-address-table configuration above is fantastic when your environment is fairly static. However, if your NLB hosts are virtual machines residing on a hypervisor cluster, that hardcoded configuration will prevent the NLB nodes from converging when the virtual machines are migrated to other hypervisors in the cluster, because those hypervisors connect to the switch via ports other than the ones hardcoded above. And you can’t hardcode all the hypervisor ports in advance, because the VMs don’t reside on all nodes at the same time. Therefore, I recommend avoiding the mac-address-table command and relying only on the arp command.

References

http://technet.microsoft.com/en-us/library/bb742455.aspx

http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&docType=kc&docTypeID=DT_KB_1_1&externalId=1006525

20 ways to send large files over the Internet

When you need to send large files, email is often the most restrictive transport. By default, on-premises Exchange 2010 limits emails to just 10 megabytes, and Office 365 allows 25 megabytes per email message. While you can increase your on-premises email server’s size limit, you cannot increase the Office 365 limit beyond 25 megabytes. And even if you could, if your recipients are outside your organization, their email server may limit message size to 10 megabytes. For end users this is a frustrating experience, and it results in helpdesk requests like “why am I getting this bounce-back message when I email so-and-so.”
Many IT departments provide users with either FTP sites or SharePoint extranet sites to share large files with external users. However, those solutions require IT overhead to maintain.

There are now many free or low cost solutions for sharing files including:
1. Adobe SendNow – at $20/year this seems to be very reasonable. The Outlook plug-in does not yet work with Office 2013.
2. Box.com – offers a free account. The business account ($15/month) includes a pretty amazing outlook plug-in.
3. YouSendIt – Free accounts can send up to 100 MB. Paid plans start at $10/month or $100/year, and the size per email increases to 2 GB. The biggest plus is the Outlook plug-in, because it automatically detects when attachments exceed a predetermined size, e.g. 10 MB or 35 MB. I verified that the Outlook plug-in works with Office 2013.

Once I signed up for a free account with YouSendIt, I downloaded the free Outlook plug-in.

YouSendIt’s Outlook plug-in offers a single sign-on option to integrate with Active Directory.

While there are other cloud storage providers, most are consumer oriented and do not natively integrate with Microsoft Outlook.

Xobni is a 3rd-party tool that allows Outlook to send files from DropBox or SkyDrive.
Likewise, Harmon.ie has an Outlook 2007/2010 plug-in that converts large attachments to links on Google Drive. The plug-in does not support Office 2013.

For quick ad hoc file transfers, check out 7. DropSend and 8. WeTransfer.com. Within seconds of visiting their websites you can transfer large files (up to 2 GB, for free).

Here are the other 12 sites that offer file-sharing services: Egnyte, SendThisFile (offers an Outlook plug-in), Send6, MediaMax, MailBigFile, SendSpace, MegaUpload, zUpload, MyOtherDrive, DivShare, TransferBigFiles, and MediaFire.