It’s been said many times: SharePoint is a complex platform. When Microsoft released SharePoint 2010, management complexity increased further with the addition of service applications, more flexible deployment models, new features such as Office Web Apps, and expanded data-handling capabilities (for example, increased list item limits and remote storage of binary large objects, or BLOBs). As a result, systems administrators need to be even more vigilant in maintaining their SharePoint farms to ensure acceptable performance.
One of the biggest performance drains in any SharePoint 2010 management environment is having inadequate hardware. And because of all the changes in the deployment models and the new features, hardware requirements for SharePoint 2010 have increased or changed significantly compared with earlier versions of the software.
At the core of SharePoint is SQL Server. Because SharePoint depends on SQL Server for most operations, it’s important that the systems running SQL Server have enough processing power and random-access memory (RAM). Also, while Microsoft officially supports virtualizing SharePoint implementations, some SQL Server implementations are not good candidates for virtualization. And even under the best of circumstances, virtualized instances of SQL Server don’t perform as well as physical ones. Before you virtualize systems in a SharePoint 2010 environment, weigh the pros and cons of virtualizing SQL Server and make sure the environment can still deliver appropriate performance levels.
Out of service?
Beyond SQL Server, SharePoint 2010’s new service application architecture needs to be taken into account. It enables administrators to create proper application server environments, which are responsible for hosting all of the “shared services” (not to be confused with SharePoint 2007’s Shared Services concept) for one or more SharePoint farms. Service applications supporting functions such as search, user-profile imports, bulk document conversion and managed metadata can all be installed and run independently of the larger SharePoint farm.
However, different service applications have different requirements. For example, Visio Services uses a delegated Windows identity to access data, potentially outside of the domain where SharePoint is installed. As a result, you also need to set up the Secure Store Service to store external credentials used by Visio. Though that isn’t necessarily processing-intensive, failure to properly configure the credentialing service can result in failed authentication attempts, poor performance and unhappy users.
To cite another example, the Search Service is very processor- and RAM-intensive while the crawling and indexing process is running. If you have lots of content sources or large volumes of data, it could consume a good portion of the processing capacity on a server and cause performance degradation for other services running on the same hardware.
While SQL Server and the service applications require a lot of attention, performance can also be greatly affected by the front-end Web servers that typically are part of SharePoint farms. With SharePoint 2010, the minimum amount of RAM for both standalone servers and systems in a farm went up to 8 GB. However, administrators will find that SharePoint 2010 runs best with 16 GB to 24 GB. Along the same lines, it’s important to ensure that there’s sufficient processing capacity, meaning at least a dual- or quad-core processor, though dual multicore processors are ideal.
Running in sand
As part of SharePoint 2010, Microsoft also introduced the idea of “sandboxed solutions.” These are packages of features, definitions and other functionality that can be deployed by “site collection” managers or some individual end users without the involvement of a SharePoint farm administrator (though the correct permissions are necessary); each site collection can contain its own set of sandboxed solutions.
By default, SharePoint assigns “quota points” to each site collection on a server. As the sandboxed solutions operate, they consume quota points based on their usage of memory, processor cycles and specific calls to the SharePoint application programming interface. If one runs poorly, it can be shut down. However, one errant solution can cause all of the others within a site collection to be shut down. Therefore, farm administrators should leverage controls within SharePoint’s Central Administration management suite to set up appropriate quota templates and block rogue solutions when necessary. And at the site-collection level, administrators can monitor for performance and quota consumption.
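SharePoint’s exact resource measures and conversion rates are internal, but the quota-point model is easy to reason about with a small sketch. Everything below (the measure names, per-unit point costs and daily limit) is an illustrative assumption, not SharePoint’s actual accounting:

```python
# Conceptual sketch of quota-point accounting for sandboxed solutions.
# Measure names, per-unit costs and the daily limit are assumptions.

DAILY_QUOTA_POINTS = 300  # hypothetical per-site-collection daily limit

# Each resource measure converts raw usage into quota points.
POINTS_PER_UNIT = {
    "cpu_seconds": 1.0,          # assumed conversion rates
    "memory_mb_seconds": 0.05,
    "unhandled_exceptions": 50.0,
}

def points_consumed(usage: dict) -> float:
    """Convert one solution's raw resource usage into quota points."""
    return sum(POINTS_PER_UNIT[measure] * amount
               for measure, amount in usage.items())

def check_site_collection(solution_usage: dict) -> tuple:
    """Sum points across all sandboxed solutions in a site collection.

    Returns (total_points, blocked). Points aggregate per site
    collection, so a single runaway solution can push the total over
    quota and shut down every sandboxed solution the site hosts.
    """
    total = sum(points_consumed(u) for u in solution_usage.values())
    return total, total > DAILY_QUOTA_POINTS
```

The key property the sketch captures is that points aggregate at the site-collection level, which is why one errant solution can take its well-behaved neighbors down with it.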
While SharePoint 2010 adds management complexity, it also offers some help for dealing with that complexity. One of the big improvements Microsoft made in SharePoint 2010 was to build more intelligence into Central Administration. A new management feature called Health Analyzer alerts administrators to potential problems in SharePoint farms. Health Analyzer can detect, for example, if a specific service hasn’t been properly configured or if disk space is running out. Alerts are placed in a Health Reports list, and SharePoint makes note of them with a colored banner on the Central Administration home page. A separate list of rules governs what Health Analyzer monitors. Each of the rules can be modified to suit specific needs; they can also be disabled if necessary.
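Health Analyzer is essentially a rule engine: a list of rules, each inspecting farm state and posting an alert when it fires, and each of which can be tuned or disabled. A conceptual sketch (the rule names, state shape and thresholds here are invented for illustration, not Health Analyzer’s real rule set):

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Rule:
    name: str
    check: Callable[[dict], Optional[str]]  # returns an alert message, or None
    enabled: bool = True                    # rules can be disabled if necessary

def run_health_analyzer(rules, farm_state):
    """Evaluate every enabled rule; return Health Reports entries."""
    reports = []
    for rule in rules:
        if not rule.enabled:
            continue
        alert = rule.check(farm_state)
        if alert:
            reports.append((rule.name, alert))
    return reports

# Two hypothetical rules mirroring the examples in the article.
def low_disk_space(state):
    if state["free_disk_gb"] < 5:
        return f"Only {state['free_disk_gb']} GB of disk space remaining"

def misconfigured_services(state):
    bad = [name for name, ok in state["services_configured"].items() if not ok]
    if bad:
        return "Services not properly configured: " + ", ".join(bad)

RULES = [Rule("Disk space", low_disk_space),
         Rule("Service configuration", misconfigured_services)]
```

Disabling a rule is then just a flag flip, which mirrors how individual Health Analyzer rules can be modified or switched off in Central Administration.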
This article really just touches on the basics. However, many performance issues that frustrate end users and SharePoint administrators alike can be traced to poor hardware sizing, improper service application planning or out-of-control custom applications. Beyond these items, it’s also worth considering and evaluating the individual workloads being run on a SharePoint farm. For example, poor collaboration performance could be caused simply by insufficient bandwidth between client systems and servers, or by a misconfigured router. Since performance is dependent on a number of moving parts, be sure to consider the whole picture.
Diagnosing a SharePoint performance issue
Having trouble tracking down the cause of a performance problem in your SharePoint farm? It’s not always easy to figure out where one might have originated. Follow these steps to find the culprit:
- Ensure that the hardware is not overtaxed or undersized (especially your SQL Server systems).
- Open Central Administration and make sure that Health Analyzer hasn’t identified an issue with a service or farm configuration – insufficient disk space, for example.
- Check whether other services running on SharePoint servers might be affecting performance – backup routines or other applications, for example.
- Look in both the SharePoint logs (in the SharePoint root in the LOGS directory) and Windows Event logs for latent problems, such as communications issues or authentication failures between servers or services.
- Identify errant sandboxed solutions or custom add-ons that may be consuming abnormally high resources. If you find any, disable them or reduce their resource quotas.
- Investigate the connection between clients and the server. Are all of your client workstations experiencing a performance problem, or just specific clients? Are there common attributes on the machines experiencing problems?
- Turn on the Developer Dashboard, a new feature that’s off by default; it allows you to see the processing times for different operations.
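Several of the steps above boil down to scanning logs for recurring error signatures. As a rough sketch, assuming plain-text log lines (real ULS entries are tab-delimited with timestamp, process, category and level, and the patterns below are simplified assumptions):

```python
import re
from collections import Counter

# Simplified signatures for the kinds of latent problems noted above;
# real triage would use the ULS category and level fields as well.
PATTERNS = {
    "auth_failure": re.compile(r"access denied|authentication failed", re.IGNORECASE),
    "timeout": re.compile(r"timed out|timeout", re.IGNORECASE),
    "low_disk": re.compile(r"disk (space|full)", re.IGNORECASE),
}

def summarize_log(lines):
    """Count how often each suspicious signature appears in the log."""
    counts = Counter()
    for line in lines:
        for name, pattern in PATTERNS.items():
            if pattern.search(line):
                counts[name] += 1
    return counts
```

Sorting the resulting counts points you at the noisiest failure mode first, which is usually the right place to start digging.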
ABOUT THE AUTHOR
Shawn Shell is the founder of Consejo Inc., a consultancy based in Chicago that specializes in Web-based applications, employee and partner portals, and enterprise content management.