
Monthly Archives: October 2011

Database Space issue error codes

If a database runs out of disk space, an 1101 or 1105 error is raised. So long as the database
is out of space, you cannot insert any new data. You can increase the space available to a
database by adding a file to the appropriate filegroup on a disk volume that has additional
space available.
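A minimal sketch of adding a file, assuming a hypothetical database, filegroup, and path (all names here are illustrative, not from the source):

```sql
-- Hypothetical example: add a data file on a volume that still has free space.
ALTER DATABASE SalesDB
ADD FILE
(
    NAME = SalesData2,                        -- logical file name
    FILENAME = 'E:\SQLData\SalesData2.ndf',   -- physical path on the new volume
    SIZE = 512MB,
    FILEGROWTH = 128MB
)
TO FILEGROUP [SECONDARY];
```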

If tempdb runs out of space, it can affect every database on an instance. In severe cases,
all your applications running against an instance could cease to function. In addition to the
1101, 1105, and 9002 errors, tempdb raises 3958, 3959, 3966, and 3967 errors when space
issues for the version store are encountered.

Lesson Summary
• The SQL Server error log contains configuration information upon instance startup,
errors, stack dumps, and informational messages about your instance.
• The Windows Application Event Log contains service start/stop messages, major event
informational messages, errors, and anything from a RAISERROR command that uses
either the WITH LOG parameter or specifies a severity level of 16 or higher.
• The Log File Viewer allows you to view error and event logs combined into a single list
in chronological order. The Log File Viewer also allows you to filter and search logs.
• An 1101 or 1105 error occurs when a database runs out of space.
• A 9002 error occurs when a transaction log is full.
• When the version store encounters space issues, you could receive 3958, 3959, 3966,
and 3967 errors.

Source: MCTS Training Kit – Mike Hotek

 

Posted by on October 19, 2011 in SQL Server

 

Database Integrity Checks (DBCC)

Database Integrity Checks
As you learned in Lesson 2, databases have an option called PAGE_VERIFY. Page
verification can be set to either TORN_PAGE_DETECTION or CHECKSUM. The PAGE_VERIFY
TORN_PAGE_DETECTION option exists for backwards compatibility and should not be used.
When the PAGE_VERIFY CHECKSUM option is enabled, SQL Server calculates a checksum for
the page prior to the write. Each time a page is read off disk, a checksum is recalculated and
compared to the checksum written to the page. If the checksums do not match, the page has
been corrupted.
When SQL Server encounters a corrupt page, an error is thrown, the command attempting
to access the corrupt page is aborted, and an entry is written into the suspect_pages table in
the msdb database.
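The suspect_pages table can be inspected directly. A minimal query against it (the column names below follow the msdb catalog):

```sql
-- Review pages that SQL Server has flagged as suspect.
-- event_type values 1-3 indicate 823/824 errors, torn pages, and checksum failures.
SELECT database_id, file_id, page_id, event_type, error_count, last_update_date
FROM msdb.dbo.suspect_pages
ORDER BY last_update_date DESC;
```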
BEST PRACTICES: PAGE VERIFICATION
Although page verification can detect and log corrupted pages, the page must be read
off disk to trigger the verification check. Data is normally read off disk when users and
applications access data, but instead of having a user receive an error message, it is much
better for you to proactively find corruption and fix the problem by using a backup before
the user has a process aborted.

You can force SQL Server to read every page from disk and check the integrity by executing
the DBCC CHECKDB command. The generic syntax of DBCC CHECKDB is:
DBCC CHECKDB [( 'database_name' | database_id | 0
[ , NOINDEX | { REPAIR_ALLOW_DATA_LOSS | REPAIR_FAST
| REPAIR_REBUILD } ] )]
[ WITH { [ ALL_ERRORMSGS ] [ , [ NO_INFOMSGS ] ] [ , [ TABLOCK ] ]
[ , [ ESTIMATEONLY ] ] [ , [ PHYSICAL_ONLY ] ] | [ , [ DATA_PURITY ] ] } ]
When DBCC CHECKDB is executed, SQL Server performs all the following actions:
• Checks page allocation within the database
• Checks the structural integrity of all tables and indexed views
• Calculates a checksum for every data and index page to compare against the stored
checksum
• Validates the contents of every indexed view
• Checks the database catalog
• Validates Service Broker data within the database
To accomplish these checks, DBCC CHECKDB executes the following commands:
• DBCC CHECKALLOC, to check the page allocation of the database
• DBCC CHECKCATALOG, to check the database catalog
• DBCC CHECKTABLE, for each table and view in the database, to check the structural
integrity
Any errors encountered are output so that you can fix the problems. If an integrity error is
found in an index, you should drop and re-create the index. If an integrity error is found in a
table, you need to use your most recent backups to repair the damaged pages.
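Two common invocations, matching the syntax above (the database name is illustrative):

```sql
-- Full integrity check, suppressing informational messages and showing every error.
DBCC CHECKDB (N'AdventureWorks') WITH NO_INFOMSGS, ALL_ERRORMSGS;

-- Faster check of physical structures only, often used on very large databases.
DBCC CHECKDB (N'AdventureWorks') WITH PHYSICAL_ONLY;
```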

 

Source: MCTS Training Kit – Mike Hotek

 

Posted by on October 19, 2011 in SQL Server

 

Filtered Index

An index key can have a significant skew in its data values: a large percentage of the
table may contain values duplicated within only a narrow range of the overall set of values. If a
query retrieved data from the portion of the table that was highly selective, it is likely
that subsequent queries executed against the low-selectivity range would use the
same index, even though doing so is clearly inappropriate.

To handle the cases where significant skew exists in the data, SQL Server 2008 allows you
to create filtered indexes. A filtered index is simply an index with a WHERE clause. Only the
index keys matching the WHERE clause are added to the index, allowing you to build indexes
that focus on the highly selective portions of the table while allowing SQL Server to choose
another method for the less selective range.

Filtered indexes have the following restrictions:
• They must be nonclustered indexes.
• They cannot be created on computed columns.
• Columns cannot undergo implicit or explicit data type conversion.
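A minimal sketch of the idea, assuming a hypothetical Orders table where most rows are shipped and only a small, highly selective set is still open:

```sql
-- Hypothetical example: index only the selective subset of rows.
-- Table and column names are illustrative, not from the source.
CREATE NONCLUSTERED INDEX idx_Orders_Open
ON dbo.Orders (OrderDate, CustomerID)
WHERE Status = 'Open';
```

Queries that filter on `Status = 'Open'` can use this small index; queries against the low-selectivity shipped range are left to other access methods.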

 

Source: MCTS 2008 Training Kit – Mike Hotek

 
 

Covering Index

When an index is built, every value in the index key is loaded into the index. In effect, each index is a mini-table containing all the values corresponding to just the columns in the index key. Therefore, it is possible for a query to be entirely satisfied by using the data in the index. An index that is constructed such that SQL Server can completely satisfy queries by reading only the index is called a covering index.

If you can construct covering indexes for frequently accessed data, you can improve the response time for queries by avoiding additional reads from the underlying table. You can also potentially increase concurrency by having queries access the data from an index while changes that do not write to the index are being made to the underlying table.

SQL Server is also capable of using more than one index for a given query. If two indexes have at least one column in common, SQL Server can join the two indexes to satisfy a query.
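A sketch of a covering index, using SQL Server 2008's INCLUDE clause and a hypothetical Customer table (names are illustrative):

```sql
-- Key column LastName supports the WHERE clause; included columns make the
-- index cover the SELECT list without touching the base table.
CREATE NONCLUSTERED INDEX idx_Customer_LastName
ON dbo.Customer (LastName)
INCLUDE (FirstName, Phone);

-- This query can be satisfied entirely from the index above:
-- SELECT FirstName, Phone FROM dbo.Customer WHERE LastName = 'Smith';
```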

 

Source: MCTS 2008 Training kit – Mike Hotek

 

Posted by on October 17, 2011 in SQL Server

 

Index

Index Levels
The number of levels in an index and the number of pages within each level of an index are
determined by simple mathematics. A data page in SQL Server is 8,192 bytes in size, which
can be used to store up to 8,060 bytes of actual user data. Based on the number of bytes
required to store an index key, determined by the data type, you can calculate the number of
rows per page that are stored by using simple division.
The following example describes not only how an index is built, but also the size calculations
for an index. It gives you an idea of how valuable it can be to use an index to find data within
very large tables, as well as explains why the amount of time needed to find a piece of data does not
vary much even if the size of a database increases dramatically. Of course, the amount of time
needed to locate data also depends upon writing efficient queries.
If you build an index on an INT column, each row in the table will require 4 bytes of
storage in the index.

If the table contains only 1,200 rows of data, you need 4,800 bytes of storage. Because all
the entries would fit on a single page of data, the index would have a single page that would
be the root page as well as the leaf page. In fact, you could store 2,015 rows in the table and
still allocate only a single page to the index.
As soon as you add the 2,016th row, however, all the entries can no longer fit on a single
page, so two additional pages are allocated to the index in a process called page splitting.
The existing root page is pushed down the structure to become a leaf-level page. SQL Server
takes half of the data on the index page and moves it to one of the newly allocated pages.
The other new page is allocated at the top of the index structure to become the new root
page. The final step in the process is to take the first entry on each of the leaf-level pages
and write the entries to the newly created root page. You are now left with an index with a
root page and two leaf-level pages. This index does not need an intermediate level created
because the root page can contain all the values at the beginning of the leaf-level pages. At
this point, locating any row in the table requires scanning exactly two pages in the index.
NOTE: PAGE SPLITS
Keep in mind that rows on an index page are maintained in sorted order, so SQL Server
writes each new entry into its correct sorted position. A page split can therefore occur
anywhere within the index, not only when rows are added at the end.
You can continue to add rows to the table without affecting the number of levels in
the index until you reach 4,060,225 rows. You then have 2,015 leaf-level pages with 2,015
entries apiece. The root page has 2,015 entries corresponding to the first row on each of the
leaf-level pages. Therefore, for SQL Server to find any row within the 4,060,225 rows in the
table, it would require reading exactly two pages. When the 4,060,226th row of data is added
to the table, another page needs to be allocated to the index at the leaf level, but the root
page cannot hold 2,016 entries because that would make it exceed the 8,060 bytes that
are allowed. So SQL Server goes through a page split process. The previous root-level page
now becomes an intermediate-level page, with a second page allocated at the intermediate
level. The former root page undergoes a page split to move half of the entries to the newly
allocated intermediate-level page, and the first entry on each of the two intermediate-level
pages is written to the newly allocated root page.
The next time SQL Server needs to introduce an intermediate level occurs when it must add
the 8,181,353,376th row of data to the table: 2,015 rows on the root page corresponding to
2,015 pages on the intermediate level, each of which has 2,015 entries corresponding to 2,015
pages at the leaf level, plus one extra row of data that will not fit.
As you can see, this type of structure enables SQL Server to locate rows in extremely large
tables very quickly. In this example, finding a row in the table with a little over 4 million rows
requires SQL Server to scan only two pages of data, and the table could grow to more than
8 billion rows before it would require SQL Server to read three pages to find any row.
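The arithmetic above can be sketched directly in T-SQL (8,060 usable bytes per page, a 4-byte INT key):

```sql
-- Size arithmetic for an index on a 4-byte (INT) key.
DECLARE @bytes_per_page int = 8060, @key_bytes int = 4;
DECLARE @rows_per_page int = @bytes_per_page / @key_bytes;   -- 2015

SELECT
    @rows_per_page AS rows_per_page,                          -- 2015
    @rows_per_page * @rows_per_page + 1
        AS row_that_forces_intermediate_level,                -- 4,060,226
    POWER(CAST(@rows_per_page AS bigint), 3) + 1
        AS row_that_forces_next_level;                        -- 8,181,353,376
```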

 

Source: Microsoft 2008 SQL Server 2008 – Implementation and Maintenance – Mike Hotek

 
 

Data Partitioning

Use sp_helpfile and sp_helpfilegroup to see the file and filegroup allocations.

Index partitioning
If table and index use the same partitioning function and partitioning columns (in same order), they are said to be aligned.
If table and index use the same partitioning function and partitioning scheme, they are said to be storage aligned. In general, having the tables and indexes on the same file or filegroup can often benefit where multiple CPUs can parallelize an operation across partitions. In the case of storage alignment and multiple CPUs, SQL Server can have each processor work directly on a specific file or filegroup – and know that there are no conflicts in data access because all data required for a particular join or index lookup will be colocated on the same disk.

LEFT and RIGHT partitions
For a LEFT partition function, the extreme LEFT boundary point is fixed. For a RIGHT partition function, the extreme RIGHT boundary point is fixed. The other ends of the partitions do not have boundary points explicitly defined (i.e., the extreme right is ∞ for a LEFT partition function, and the extreme left is -∞ for a RIGHT partition function).

To easily remember the rules of LEFT and RIGHT partition functions, picture a number line and use just one boundary condition as an example.

In a LEFT partition function, any value on the LEFT of a boundary condition is always <= the boundary condition, and any value on the right of the boundary condition is always > the boundary condition. (Remember that in a LEFT partition function the extreme right is open (∞), so values toward the right are always > the boundary, going toward infinity.)


For example, if the first value (or boundary condition) of a partition function is '20001001', then the values within the bordering partitions will be:

1st partition: all data <= '20001001'
2nd partition: all data >  '20001001'

In a RIGHT partition function, any value on the RIGHT of a boundary condition is always >= the boundary condition, and any value on the left of the boundary condition is always < the boundary condition. (Remember that in a RIGHT partition function the extreme left is open (-∞), so values toward the left are always < the boundary, going toward -∞.)

For example, if the first value (or boundary condition) of a partition function is '20001001', then the values within the bordering partitions will be:
1st partition: all data <  '20001001'
2nd partition: all data >= '20001001'
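The rule can be checked with the $PARTITION function; this sketch uses two hypothetical partition functions that differ only in direction:

```sql
-- Same boundary value, opposite directions.
CREATE PARTITION FUNCTION pfLeft (char(8))
AS RANGE LEFT FOR VALUES ('20001001');

CREATE PARTITION FUNCTION pfRight (char(8))
AS RANGE RIGHT FOR VALUES ('20001001');

-- $PARTITION returns the partition number a value maps to.
-- With RANGE LEFT the boundary value belongs to partition 1 (<= boundary);
-- with RANGE RIGHT it belongs to partition 2 (>= boundary).
SELECT $PARTITION.pfLeft('20001001')  AS left_fn,   -- 1
       $PARTITION.pfRight('20001001') AS right_fn;  -- 2
```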

The process of partitioning a table, index, or indexed view is as follows:

1. Create a partition function.
2. Create a partition scheme mapped to the partition function.
3. Create the table, index, or indexed view on the partition scheme.

PARTITION FUNCTION
A partition function defines the set of boundary points on which data will be partitioned.
E.g.:

CREATE PARTITION FUNCTION myPartFunction (int)
AS RANGE LEFT
FOR VALUES (10, 20, 30, 40, 50)

PARTITION SCHEME
A partition scheme defines the storage structures and collection of filegroups that you want to use with a given partition function.
E.g.:

CREATE PARTITION SCHEME myPartScheme
AS PARTITION myPartFunction
TO (fileGroup1, fileGroup2, fileGroup3, fileGroup4, fileGroup5, fileGroup6)

(Note that five boundary points create six partitions, so six filegroups are required.)

OR

CREATE PARTITION SCHEME myPartScheme
AS PARTITION myPartFunction
ALL TO (fileGroup1)

Important: The filegroups must already be present in the database. SQL Server allows the use of the ALL keyword, which lets you create all partitions defined by the partition function within a single filegroup. If you do not use the ALL keyword, the partition scheme must be defined in such a way as to contain a filegroup for each partition that is created by the partition function mapped to the partition scheme; i.e., each partition created by the partition function must be associated with a filegroup. If you use the ALL keyword, a maximum of only one filegroup can be specified.

CREATING A PARTITIONED TABLE OR PARTITIONED INDEX
This is similar to creating a nonpartitioned table, index, or indexed view. The difference lies in the ON clause: instead of specifying a filegroup on which to create the table, you specify a partition scheme.

CREATE TABLE Employee
(EmployeeID int, Name varchar(100))
ON myPartScheme (EmployeeID)

You need to specify the column in the table, the partitioning key, to which the partition function will be applied. The partitioning key must match the data type, length, and precision of the partition function. If the partitioning column is computed, the computed column must be PERSISTED (values must be physically stored).

Creating a partitioned index is similar:

CREATE NONCLUSTERED INDEX idx_employeeName ON dbo.Employee(Name)
ON myPartScheme(EmployeeID)

When specifying the partitioning key for an index, you are not limited to the columns on which the index is defined.

SPLIT AND MERGE OPERATORS
The SPLIT operator introduces a new boundary point into a partition function. MERGE eliminates a boundary point from a partition function. You must be careful when using these operators because you are either adding or removing an entire partition from the partition function. Data is not being removed, only a partition. Because a partition can reside only in a single filegroup, these operations can cause a significant amount of disk I/O as SQL Server relocates rows on disk.

ALTER PARTITION FUNCTION partition_function_name ()
{ SPLIT RANGE (boundary_value)
| MERGE RANGE (boundary_value) } [;]

ALTERING A PARTITION SCHEME
You can add filegroups to an existing partition scheme to create more storage space for a partitioned table.

ALTER PARTITION SCHEME partition_scheme_name
NEXT USED [ filegroup_name ] [;]

The NEXT USED clause has two purposes:
1. It adds a new filegroup to the partition scheme, if the specified filegroup is not already part of the partition scheme.
2. It marks the NEXT USED property for a filegroup. The filegroup marked with the NEXT USED flag is the filegroup that contains the next partition that is created when a SPLIT operation is executed.
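A typical workflow combining the two statements, reusing the scheme and function names from the examples above (the filegroup and boundary value are illustrative):

```sql
-- Mark the filegroup that will receive the next partition,
-- then split in the new boundary point.
ALTER PARTITION SCHEME myPartScheme
NEXT USED fileGroup6;

ALTER PARTITION FUNCTION myPartFunction()
SPLIT RANGE (60);
```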

Note: A table can be partitioned after it has been populated with data, although this causes significant disk I/O. To partition an existing table, you need to drop the clustered index and re-create the clustered index on the partition scheme. To partition an existing index or indexed view, drop the index and re-create it on a partition scheme.

SWITCH OPERATOR
The basic idea is that SWITCH allows you to exchange partitions between tables in a perfectly scalable manner with no locking, blocking, or deadlocking. SWITCH has several requirements to ensure that the operation is perfectly scalable. The most important are as follows:

  • The data and index for the source and target tables must be aligned.
  • Source and target tables must have the same structure.
  • Data cannot be moved from one filegroup to another.
  • Two partitions with data cannot be exchanged.
  • The target partition must be empty.
  • The source or target table cannot participate in replication.
  • The source or target table cannot have full-text indexes or a FILESTREAM data type defined.
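A hedged sketch of a sliding-window SWITCH, assuming a hypothetical partitioned Orders table and an empty, identically structured staging table on the same filegroup as partition 1:

```sql
-- Move partition 1 out of the partitioned table into a staging table
-- for archiving. Because only metadata changes hands, this completes
-- almost instantly regardless of the partition's size.
ALTER TABLE dbo.Orders
SWITCH PARTITION 1 TO dbo.OrdersArchiveStaging;
```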

Points to Remember:

1. You cannot use columns of the text, ntext, image, xml, varbinary(max), or varchar(max) types, or any CLR data types, for partitioning. Any column used for partitioning should be deterministic.
2. The maximum number of partitions for a table is 1,000.
3. Therefore, you are allowed to specify a maximum of 999 boundary points.
4. A computed column must be PERSISTED.
5. SPLIT is used to introduce a new boundary point to a partition function.
6. MERGE is used to remove a boundary point from a partition function.
7. SWITCH is used to move partitions between tables.
8. In a SWITCH operation, you cannot move data from one filegroup to another, nor exchange two partitions with data; i.e., only one partition may contain data.

List which tables are partitioned, and which partition scheme and partition function they use:

SELECT DISTINCT OBJECT_NAME(o.object_id) AS table_name,
       pf.name AS function_name,
       ps.name AS scheme_name
FROM sys.objects o
INNER JOIN sys.indexes i ON o.object_id = i.object_id
INNER JOIN sys.partition_schemes ps ON ps.data_space_id = i.data_space_id
INNER JOIN sys.partition_functions pf ON ps.function_id = pf.function_id
WHERE i.index_id < 2;

List which column the table is being partitioned on:

SELECT b.name
FROM sys.index_columns a
INNER JOIN sys.columns b
    ON a.object_id = b.object_id
   AND a.column_id = b.column_id
WHERE OBJECT_NAME(a.object_id) = '<tableName>'
  AND a.partition_ordinal = 1;

*If partition_ordinal = 0, the column is not a partitioning column.

 

SOURCE: MCTS Training kit (chapter 6)

http://www.sqlskills.com/resources/whitepapers/partitioning%20in%20sql%20server%202005%20beta%20ii.htm#_Toc79339948

 
 

C2 and Common Criteria Auditing

C2 auditing is a U.S. Department of Defense audit specification that can be enabled by:

EXEC sp_configure 'c2 audit mode', 1;
RECONFIGURE;

C2 auditing has been superseded by the Common Criteria specification developed by the European Union. However, with respect to SQL Server, whether you are complying with C2 or with Common Criteria, the audit requirement is essentially the same; i.e., you need to audit EVERY successful and unsuccessful attempt to access a database object.

When C2 auditing is enabled, an audit log file is written to the default data directory with a rollover size of 200 megabytes. SQL Server continues to generate rollover files until you run out of disk space, thereby causing the instance to shut down. With C2 auditing enabled, the audit records are required to be written. If the system is too busy, user requests are aborted to free up resources to write the audit trail.

Although Common Criteria superseded C2 in the security world, it doesn’t subsume all the C2 audit-mode–option functionality. Common Criteria Compliance includes the following features:

  • Residual Information Protection (RIP.2) implementation
  • Login statistics displayed in the sys.dm_exec_sessions dynamic management view
  • Table-level DENY to override a column-level GRANT
 

Posted by on October 15, 2011 in SQL Server

 

Ownership Chains

Each object within a database has an owner associated with it: the schema owner. You can also build objects that reference other objects within a database, such as a stored procedure that calls functions which issue SELECT statements against views that are based on tables. The owner of each object referenced in a calling stack forms an ownership chain as the code transits from one object to the next within the calling stack. So long as the owner of the object and any other objects that it references have the same owner, you have an intact ownership chain. SQL Server checks your permissions on the object at the top of the calling stack, as well as each time the object owner changes within the calling stack.
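An intact chain can be sketched as follows (all object and user names are illustrative): because the procedure and the table share an owner, a user granted only EXECUTE on the procedure can read the table through it.

```sql
-- Both objects are owned by dbo, so the ownership chain is intact.
CREATE TABLE dbo.Payroll (EmployeeID int, Salary money);
GO
CREATE PROCEDURE dbo.GetPayroll
AS
SELECT EmployeeID, Salary FROM dbo.Payroll;
GO
-- EXECUTE on the proc suffices; no SELECT grant on dbo.Payroll is needed.
GRANT EXECUTE ON dbo.GetPayroll TO SomeUser;
```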

 

Source: MCTS Self-Paced Training Kit, page 275

 

Posted by on October 14, 2011 in Security, SQL Server

 

Installation

User Accounts to be used for each of the services in SQL server can be one of the following types.

  • Domain account
    • Used if the service will be accessing files from another server
    • Used if the service has to utilize network services
  • Local user account
    • Used if the service will not access any other servers
    • Used if the service is not a member of the domain
  • Built-in system account
    • Used if the service needs predefined rights on the local server or network

FILESTREAM – This is the final tab for database engine configuration and is a new feature in SQL Server 2008. This feature allows binary large objects (BLOBs) to be stored as files within the file system. During installation, this tab allows you to enable or disable FILESTREAM for T-SQL access. If enabled, you can issue INSERT, UPDATE, and DELETE statements on FILESTREAM data via T-SQL.

 

Posted by on October 10, 2011 in SQL Server

 

Logon Trigger

  • Logon triggers fire stored procedures in response to a LOGON event.
  • This event is raised when a user session is established with an instance of SQL Server. Logon triggers fire after the authentication phase of logging in finishes, but before the user session is actually established.
  •  Therefore, all messages originating inside the trigger that would typically reach the user, such as error messages and messages from the PRINT statement, are diverted to the SQL Server error log. Logon triggers do not fire if authentication fails.
  • You can use logon triggers to audit and control server sessions, such as by tracking login activity, restricting logins to SQL Server, or limiting the number of sessions for a specific login.
  • Note that the LOGON event corresponds to the AUDIT_LOGIN SQL Trace event, which can be used in event notifications.
  • The primary difference between triggers and event notifications is that triggers are raised synchronously with events, whereas event notifications are asynchronous. This means, for example, that if you want to stop a session from being established, you must use a logon trigger. An event notification on an AUDIT_LOGIN event cannot be used for this purpose.
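A hedged sketch of the session-limiting use case described above, following the pattern in the SQL Server documentation (the login name and session limit are illustrative):

```sql
-- Refuse a new connection for a specific login once it already has
-- more than three open sessions. ROLLBACK inside a logon trigger
-- cancels the session being established.
CREATE TRIGGER trg_limit_sessions
ON ALL SERVER WITH EXECUTE AS 'sa'
FOR LOGON
AS
BEGIN
    IF ORIGINAL_LOGIN() = 'ReportingUser'
       AND (SELECT COUNT(*)
            FROM sys.dm_exec_sessions
            WHERE is_user_process = 1
              AND original_login_name = 'ReportingUser') > 3
        ROLLBACK;
END;
```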
 

Posted by on October 8, 2011 in Security, SQL Server