Sunday, April 20, 2014

SQL Server Interview Questions and Answers


SQL Server 2016 New Features!

Improvements on AlwaysON Availability Groups:
– Standard Edition will come with Availability Group support: one database per group, synchronous or asynchronous, with a non-readable secondary (HA/DR only).
– Improved log throughput.
– 3 sync replicas, up from 2 in SQL 2014.
– Listener will be able to do round-robin load balancing of read-only requests on multiple secondaries.
– Better policies for failover, for example, looking at individual database health.
– Microsoft DTC support.



Improvements on Column store Indexes:

– Parallel insert.
– Nonclustered B-tree indexes on top of the columnstore.
– Read/write nonclustered columnstore on top of a regular clustered index.
– In-memory columnstore on in-memory OLTP tables.
– More T-SQL constructs run in batch mode.
– More predicates and aggregates pushed down to the storage engine.
– REORGANIZE will remove deleted rows and merge small rowgroups, online.
– New DMVs, XEvents and Perfmon counters.
– Support for RCSI or Snapshot isolation.
– Fully readable on an Availability Groups secondary.


Improvements on In-Memory OLTP
– In-memory OLTP will support Foreign Keys between in-memory tables.
– Size limit increased from 256GB to 2TB.
– Storage file garbage collector tuned to be faster by decoupling the GC from Filestream.
– Transparent Data Encryption (TDE) is now supported for in-memory tables.
– Check and Unique constraints added as well.
– Multi-threaded checkpoint for in-memory (one per container).
– ALTER procedure and sp_recompile supported (fully online).
– ALTER TABLE for schema and index changes (offline). Requires 2x memory.
– Parallel plans are now possible on in-memory tables.
– Native compilation support for: outer joins, OR, NOT, UNION (ALL), DISTINCT, and subqueries (IN, NOT IN, EXISTS, NOT EXISTS).
– Natively compiled stored procedures can be nested.
– Ability to create natively compiled UDFs.
– Trigger support added to in-memory tables.
– Improvements on the migration wizard.
– Wide range of collations supported now.
– LOB support is being worked on for this release.
Improvements on DBA features:
– Query store: this is a new capability that allows tracking changes of execution plans, comparing different plans and easily telling the optimizer what plan to use.
– Online ALTER COLUMN.
– Truncate at Partition level.
– The instance will have better handling of multiple tempdb files. The material published so far is not 100% clear on whether this just means better defaults for tempdb, a setup screen that recommends the number of files, or SQL Server automatically adding tempdb files when there is contention. We'll have to wait and see.


Improvements on Azure integrations
– Snapshot backup to Azure storage when running as a VM on Azure and with files directly on blob storage.
– Stretch a SQL Server table to Azure. This allows a table to span from on-premises SQL Server to Azure (for example, keep your archive in the cloud and hot data on premises).

Improvements on Dev features:
– Native JSON support.
– Temporal database: transparent and built-in functionality to track historical changes in the data (for example, an address or name change).
– SQL development integrated back into one experience: SQL Server Data Tools for Visual Studio 2015 will include both relational and BI database projects.
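The temporal database feature above can be sketched with the SQL Server 2016 system-versioning syntax; the table, column and history-table names below are illustrative only.

```sql
-- A system-versioned (temporal) table: SQL Server maintains the history rows.
CREATE TABLE dbo.EmployeeAddress
(
    EmployeeID INT PRIMARY KEY,
    Address    VARCHAR(200),
    ValidFrom  DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
    ValidTo    DATETIME2 GENERATED ALWAYS AS ROW END NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.EmployeeAddressHistory));

-- Query the row as it existed at a point in time:
SELECT * FROM dbo.EmployeeAddress
FOR SYSTEM_TIME AS OF '2016-01-01';
```

Updates and deletes against the current table transparently move the old row versions into the history table.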



Improvements on Security:
– Always Encrypted: columns are encrypted using keys held on the application driver side, so the data in the database is always encrypted; this is transparent to application users.
– Dynamic Data Masking: already on Azure SQL DB V12, this allows establishing policies to mask sensitive column values to specific users or roles.
– Row-Level Security: also available on Azure SQL DB V12, this allows establishing policies that filter out specific rows based on the user trying to access the data.
Integration with Hadoop
– Polybase included in SQL Server (previously PDW/APS only) to query HDFS with T-SQL and integrate with SQL Server.
– You’ll be able to stand up a cluster of SQL Server 2016 instances, one head node and one or more compute nodes and do parallel processing with Hadoop through Polybase.
– Note this is NOT “building your own PDW”, the SQL Server compute nodes don’t have local SQL Server data that can be referenced.


Improvements on BI:
– Datazen is now available to any SQL 2008+ Enterprise + SA license holder, and should be further integrated with the BI stack by release time.
– SSRS is getting a makeover: a modern look, a mobile experience, cross-browser compatibility, parameter layout customization, and new visualizations.
– R modules will be able to execute in SQL Server, moving the analytics code close to the data.
– SSIS designer support for previous versions.
– Power Query support in SSIS.
– Power BI Q&A through the SSAS on-premises data connector.
– Pin SSRS reports to PowerBI dashboards.
– Many to Many relationships on SSAS Tabular.
– Built-in time intelligence on SSAS Tabular models, new DAX functions, parallel partition processing.
– New scripting language for Tabular model projects.
– Netezza as a native data source for SSAS.
– DBCC instruction support for multidimensional for maintenance related tasks.
– Tight integration and improvements in relation to SharePoint vNext.


Performance Optimization:
Schema:
Define primary keys and foreign key relationships, unique constraints and check constraints.
Use indexed views for denormalization.
Partition tables vertically and horizontally.

Queries:
Use temporary tables and table variables appropriately.
Avoid expensive operators such as NOT LIKE.
Avoid explicit or implicit functions in WHERE clauses.
Use locking and isolation level hints to minimize locking.
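The point about avoiding functions in WHERE clauses can be illustrated with a sargable rewrite; the table and column names are hypothetical.

```sql
-- Non-sargable: the function must be applied to every row,
-- so an index on OrderDate cannot be seeked.
SELECT OrderID FROM dbo.Orders
WHERE YEAR(OrderDate) = 2014;

-- Sargable rewrite: a range predicate on the bare column
-- lets the optimizer use an index seek on OrderDate.
SELECT OrderID FROM dbo.Orders
WHERE OrderDate >= '2014-01-01' AND OrderDate < '2015-01-01';
```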

Index:
1. Create indexes based on use.
2. Consider range data for clustered indexes.
3. Create an index on all foreign keys.
4. Consider indexes on columns used in WHERE, ORDER BY, GROUP BY and DISTINCT clauses.
5. Use a covering index to reduce the query execution time of frequently used statements. A covering index is a nonclustered index that has all the columns used in a WHERE clause and in the query column selection.
6. Use the Index Tuning Wizard to select the correct indexes to build.

Seeks typically happen when a query is "covered" by an index, which means the seek predicates are in the index key and the displayed columns are either in the key or included. 
A scan happens when the SQL Server Query Optimizer determines that the best way to find the data is to scan the entire index and then filter the results.
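The seek-versus-scan distinction can be demonstrated with a covering index; the schema below is illustrative.

```sql
-- The INCLUDE clause adds non-key columns to the leaf level of the
-- nonclustered index, so the query below is fully covered: it can be
-- answered by an index seek with no key lookup into the base table.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
ON dbo.Orders (CustomerID)
INCLUDE (OrderDate, TotalDue);

-- Covered query: seek predicate on the key column CustomerID,
-- all selected columns present in the index.
SELECT CustomerID, OrderDate, TotalDue
FROM dbo.Orders
WHERE CustomerID = 42;
```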

Transactions:
Avoid long-running transactions.
Use isolation level hints to minimize locking.
Ensure that explicit transactions commit or roll back.

Stored Procedure:
Use SET NOCOUNT ON in stored procedures.
It stops the message that shows the count of rows affected by a T-SQL statement or stored procedure from being returned as part of the result set.
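A minimal sketch of SET NOCOUNT ON inside a procedure; all object names are illustrative.

```sql
CREATE PROCEDURE dbo.UpdateEmployeePhone
    @EmployeeID INT,
    @Phone      VARCHAR(25)
AS
BEGIN
    SET NOCOUNT ON;  -- suppress the "(1 row(s) affected)" message

    UPDATE dbo.Employee
    SET Phone = @Phone
    WHERE EmployeeID = @EmployeeID;

    -- @@ROWCOUNT is still updated even when NOCOUNT is ON
    RETURN @@ROWCOUNT;
END
```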

Execution Plans:
Avoid table scan and index scan.
Evaluate hash joins.
Evaluate sorts and filters.

Tuning:
Use SQL Profiler to identify long-running queries and tune them.
Use SQL Profiler to monitor table and index scans.

Normalization:
1NF: A table is in 1NF if it contains no repeating fields/columns; each column holds a single, atomic value.
2NF: A table is in 2NF if it is in 1NF and every non-key field is fully functionally dependent on the whole primary key.
In 2NF we remove the partial dependencies of any non-key field on part of a composite key.
3NF: A table is in 3NF if it is in 2NF and no non-key field depends transitively on the primary key (non-key fields depend on the key and nothing else).
BCNF: A table is in BCNF if it is in 3NF and every determinant is a candidate key.
4NF: A table is in 4NF if it is in BCNF and has no non-trivial multi-valued dependencies.
5NF: A table is in 5NF if it is in 4NF and has no join dependencies that are not implied by the candidate keys (it cannot be decomposed further without losing information).

ACID properties:
Atomicity: a transaction is either fully completed or not performed at all.
Consistency: a transaction takes the database from one consistent state to another.
Isolation: transactions are kept separated from each other until they are finished.
Durability: once committed, data is never lost.

Retrieve Last Inserted Identity of Record:
Returns the last identity value inserted into an Identity column in the same scope.
SELECT @@IDENTITY
SELECT SCOPE_IDENTITY()

SCOPE_IDENTITY and @@IDENTITY return the last identity values that are generated in any table in the current session. However, SCOPE_IDENTITY returns values inserted only within the current scope; @@IDENTITY is not limited to a specific scope.

SELECT IDENT_CURRENT('tablename'): IDENT_CURRENT is not limited by scope and session; it is limited to a specified table. IDENT_CURRENT returns the value generated for a specific table in any session and any scope.
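A quick sketch contrasting the three identity functions; the table is illustrative.

```sql
CREATE TABLE dbo.Customer
(
    CustomerID INT IDENTITY(1,1) PRIMARY KEY,
    Name       VARCHAR(50)
);

INSERT INTO dbo.Customer (Name) VALUES ('Alice');

SELECT SCOPE_IDENTITY();              -- last identity in the current scope
SELECT @@IDENTITY;                    -- last identity in the current session, any scope
SELECT IDENT_CURRENT('dbo.Customer'); -- last identity for this table, any session/scope
```

The values differ, for example, when an insert fires a trigger that itself inserts into another identity table: @@IDENTITY returns the trigger's value, while SCOPE_IDENTITY() returns the value from your own statement.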

 DBCC commands: 
DBCC CHECKIDENT(Customer, RESEED, 0): Resets the identity column value to 0.
DBCC CHECKDB
DBCC CHECKTABLE
DBCC CHECKCATALOG
DBCC CHECKALLOC
DBCC SHOWCONTIG
DBCC SHRINKDATABASE
DBCC SHRINKFILE etc

COUNT_BIG vs COUNT: 
Returns the number of items in a group.
COUNT_BIG always returns a BIGINT data type value.
COUNT always returns an INT data type value.

COUNT_BIG(*) returns the number of items in a group including NULL values and duplicates. 
COUNT_BIG(ALL expression): returns the number of non-null values.
COUNT_BIG(DISTINCT expression): returns the number of unique non-null values.
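A small illustration of the two functions side by side, assuming a hypothetical dbo.Sales table.

```sql
SELECT COUNT(*)                       AS RowCountInt,     -- INT
       COUNT_BIG(*)                   AS RowCountBig,     -- BIGINT
       COUNT_BIG(DISTINCT CustomerID) AS UniqueCustomers  -- distinct non-null values
FROM dbo.Sales;
```

Note that an indexed view whose definition contains GROUP BY must use COUNT_BIG(*); COUNT(*) is not allowed there.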

View:
A view is a stored query that can contain multiple columns from one or more tables. A view is directly updatable if the modification affects only one base table; a view over multiple base tables is generally not directly updatable.
We can use an INSTEAD OF trigger to update such a view. A partitioned view is also updatable.

Syntax to Create a View:
CREATE VIEW ViewName
WITH ENCRYPTION, SCHEMABINDING
AS
SELECT EmpId, EmpName FROM dbo.Employee

3 types of Views: 
Normal Views 
Indexed Views 
Partitioned Views

Indexed View:
Creating a unique clustered index on a view changes it to an indexed view. Indexed views work best for queries that aggregate many rows.

CREATE TABLE MyBigTable
(
ItemID INT PRIMARY KEY,
ItemDsc VARCHAR(20), 
QTY INT
)
 GO

CREATE VIEW MyView
WITH SCHEMABINDING AS
SELECT 
     ItemID, QTY
FROM dbo.MyBigTable
WHERE QTY > 10
GO

CREATE UNIQUE CLUSTERED INDEX idx_MyView ON MyView(ItemID) -- the indexed column must be unique; QTY would fail on duplicates

A view that is to be indexed has to be created with schema binding. This means that once the indexed view is created, the underlying tables cannot be altered in any way that would materially affect the indexed view unless the view is first altered or dropped. It also means that all the tables referenced in the view must be referenced by their two-part name (Schemaname.Tablename).

Conditions for indexed view:
1. The ANSI_NULLS and QUOTED_IDENTIFIER options must have been set to ON when the CREATE VIEW statement was executed.
2. The view must be created with the SCHEMABINDING option. Schema binding binds the view to the schema of the underlying base tables. 
3. Any user-defined functions referenced by the view must have been created using WITH SCHEMABINDING.

ANSI NULL ON/OFF:
When it is ON, any comparison of a value with NULL using = or <> evaluates to UNKNOWN, so 'column = NULL' returns no rows.
When it is OFF, 'column = NULL' returns the rows where the column is NULL.

QUOTED IDENTIFIER ON/OFF:
This option specifies how double quotation marks are treated.
When it is ON, identifiers (object names) can be delimited by double quotation marks. This can be useful in situations in which identifiers are also SQL Server reserved words.

Partition view:
 A partitioned view joins horizontally partitioned data from a set of tables across one or more servers.
 Local partition View: A view that joins tables on the same instance of SQL Server.
 Global Partition View: A view that joins tables across servers (also called a distributed partitioned view). System stored procedure used to link the servers: sp_addlinkedserver 'ServerName'
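A minimal sketch of a local partitioned view; the table names and CHECK ranges are illustrative.

```sql
-- Each member table holds one horizontal partition; the CHECK constraints
-- let the optimizer eliminate partitions that cannot match a predicate.
CREATE TABLE dbo.Orders2013
(
    OrderID   INT PRIMARY KEY CHECK (OrderID BETWEEN 1 AND 99999),
    OrderDate DATETIME
);
CREATE TABLE dbo.Orders2014
(
    OrderID   INT PRIMARY KEY CHECK (OrderID BETWEEN 100000 AND 199999),
    OrderDate DATETIME
);
GO
-- The partitioned view unions the member tables.
CREATE VIEW dbo.AllOrders AS
SELECT OrderID, OrderDate FROM dbo.Orders2013
UNION ALL
SELECT OrderID, OrderDate FROM dbo.Orders2014;
```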

Types of Triggers:
DML Triggers
Instead Of trigger: fires in place of the triggering INSERT, UPDATE or DELETE operation, so the trigger's logic runs instead of the user-specified operation.
After trigger: executes after an INSERT, UPDATE or DELETE operation on a table completes. AFTER triggers cannot be defined on views.

DDL Triggers
 CLR Triggers are based on CLR in .NET Framework. A CLR Trigger can be either AFTER Trigger or INSTEAD OF trigger. A CLR trigger can also be a DDL trigger. Instead of executing a T-SQL stored procedure, a CLR trigger executes one or more methods that are members of an assembly created in the .NET Framework and uploaded in SQL Server.
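An INSTEAD OF trigger is how a multi-table view (which is not directly updatable) can be made updatable, as mentioned in the View section above; all object names in this sketch are illustrative.

```sql
CREATE VIEW dbo.vEmployeeDept AS
SELECT e.EmpId, e.EmpName, d.DeptName
FROM dbo.Employee e
JOIN dbo.Department d ON e.DeptId = d.DeptId;
GO
CREATE TRIGGER trg_vEmployeeDept_Update
ON dbo.vEmployeeDept
INSTEAD OF UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- Route the update to the correct base table instead of
    -- letting the multi-table update fail.
    UPDATE e
    SET e.EmpName = i.EmpName
    FROM dbo.Employee e
    JOIN inserted i ON e.EmpId = i.EmpId;
END
```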

Global Variables: 
@@CONNECTIONS
@@MAX_CONNECTIONS
@@CPU_BUSY
@@ERROR
@@IDENTITY
@@IDLE
@@IO_BUSY
@@LANGID
@@LANGUAGE
@@MAXCHARLEN
@@PACK_RECEIVED
@@PACK_SENT
@@PACKET_ERRORS
@@ROWCOUNT
@@SERVERNAME
@@SPID
@@TEXTSIZE
@@TIMETICKS
@@TOTAL_ERRORS
@@TOTAL_READ / @@TOTAL_WRITE
@@TRANCOUNT
@@VERSION

Query to get the nth highest Salary: 
SELECT ENAME, SAL FROM EMP A
WHERE 1 = (SELECT COUNT(Distinct(SAL)) FROM EMP B WHERE A.SAL<=B.SAL) 
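An alternative sketch using DENSE_RANK (SQL Server 2005+); @N and the table/column names follow the example above and are illustrative.

```sql
DECLARE @N INT = 3;  -- nth highest salary to find

SELECT ENAME, SAL
FROM (
    SELECT ENAME, SAL,
           -- DENSE_RANK leaves no gaps, so ties share a rank
           DENSE_RANK() OVER (ORDER BY SAL DESC) AS SalRank
    FROM EMP
) ranked
WHERE SalRank = @N;
```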

Query to delete the duplicate records: 
WITH CTE (Col1, Col2, DuplicateCount)
AS
(
SELECT Col1, Col2,
ROW_NUMBER() OVER (PARTITION BY Col1, Col2 ORDER BY Col1) AS DuplicateCount
FROM DuplicateRecordTable
)
DELETE FROM CTE WHERE DuplicateCount > 1
GO

ROW_NUMBER(): generates a sequential row ID for each row.
Select ROW_NUMBER() OVER (Order by EmployeeID ASC) AS RowID,* 
From HumanResources.Employee 

The APPLY operator allows you to join 2 table expressions; the right table expression is processed every time for each row from the left table expression. The need of APPLY arises if you have table-valued expression on right part. 

The CROSS APPLY operator returns only those rows from left table expression (in its final output) if it matches with right table expression. 
Whereas the OUTER APPLY operator returns all the rows from left table expression irrespective of its match with the right table expression. For those rows for which there are no corresponding matches in right table expression, it contains NULL values in columns of right table expression.

CROSS APPLY is semantically equivalent to INNER JOIN. 
OUTER APPLY is semantically equivalent to LEFT OUTER JOIN. 
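The typical APPLY use case is invoking a table-valued function per row of the left input; this sketch uses hypothetical object names.

```sql
-- Inline table-valued function: top N most recent orders for a customer.
CREATE FUNCTION dbo.fn_TopOrders (@CustomerID INT, @TopN INT)
RETURNS TABLE
AS
RETURN
    SELECT TOP (@TopN) OrderID, OrderDate
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID
    ORDER BY OrderDate DESC;
GO
-- CROSS APPLY: only customers with at least one order appear.
SELECT c.CustomerName, t.OrderID, t.OrderDate
FROM dbo.Customer c
CROSS APPLY dbo.fn_TopOrders(c.CustomerID, 3) t;

-- OUTER APPLY: every customer appears; order columns are NULL when no match.
SELECT c.CustomerName, t.OrderID, t.OrderDate
FROM dbo.Customer c
OUTER APPLY dbo.fn_TopOrders(c.CustomerID, 3) t;
```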

A subquery is a query nested inside another query. A scalar subquery must return a single value for each row processed by the outer query.

SubQuery rules:
1. The DISTINCT keyword cannot be used with subqueries that include GROUP BY.
2. ORDER BY can only be specified when TOP is also specified.

Correlated Subquery:
A query is called a correlated subquery when the inner query and the outer query are interdependent: the inner query is evaluated once for every row processed by the outer query.
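A small illustrative correlated subquery (hypothetical schema): employees earning more than their own department's average.

```sql
SELECT E.Name, E.Salary
FROM Employee E
WHERE E.Salary > (SELECT AVG(E2.Salary)
                  FROM Employee E2
                  WHERE E2.DeptId = E.DeptId);  -- correlation to the outer row
```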

 
Differences between Stored Procedure and Function: 
1. Stored procedure is used to perform specific tasks. Normally function is used for computing value. 
2. Stored procedures may or may not return values. But function should return value, either scalar or table. 
3. Stored procedure cannot be used in the SELECT/WHERE/HAVING clause. But function can be called from SELECT/WHERE/HAVING clause. 

4. A stored procedure can run independently; it is executed with the EXECUTE or EXEC command. A function cannot run independently.
5. Temporary tables cannot be created inside a function, but they can be created in stored procedures.
6. TRY...CATCH statements can be used in stored procedures, but cannot be used inside a function.
7. A stored procedure can call user-defined functions, but a function cannot call stored procedures.
Extended stored procedures can be called from a function. 

8. Stored procedures can have input and output parameters. But the function can have only input parameters.
9. Stored procedures can have SELECT and all DML operations like INSERT, UPDATE, DELETE. But the function can do only SELECT operation. 
10. Stored procedure can use transaction statements. Function cannot have the transaction statements. 

Difference Between Stored Procedure and Views:
Stored procedures are precompiled database queries that improve the security, efficiency and usability.

Views:
1. Does not accept parameters.
2. Can be used as a building block in a larger query.
3. Can contain only a single SELECT query.
4. Cannot perform modifications to any table.
5. Can sometimes be used as the target for INSERT, UPDATE and DELETE queries.

Stored Procedure:
1. Accepts parameters.
2. Cannot be used as a building block in a larger query.
3. Can contain several statements (IF, ELSE, loops, etc.).
4. Can perform modifications to one or several tables.
5. Cannot be used as the target for INSERT, UPDATE and DELETE queries.

MERGE Statement: 
Used to modify data in a target table based on data in a source table. The statement joins the target table to the source table by using a column common to both tables, such as a primary key. You can then Insert, Update, Delete data from the target table — all in one statement — according to how the rows match up as a result of the join.

Sample code
MERGE BookInventory bi
USING BookOrder bo ON bi.TitleID = bo.TitleID 
WHEN MATCHED AND bi.Quantity + bo.Quantity = 0 
THEN DELETE 
WHEN MATCHED THEN 
UPDATE 
SET bi.Quantity = bi.Quantity + bo.Quantity 
WHEN NOT MATCHED BY TARGET THEN
INSERT (TitleID, Title, Quantity) VALUES (bo.TitleID, bo.Title, bo.Quantity); 

Common Table Expressions (CTE): 
A CTE is a "temporary result set" that exists only within the scope of a single SQL statement. (Single SELECT, INSERT, UPDATE, DELETE OR CREATE VIEW statement.)

 Advantages: 
1. Can be referenced multiple times in a query. 
2. Can be used to create recursive queries. 

Example: 
SELECT * FROM
(
SELECT A.Address, E.Name, E.Age
FROM Address A
INNER JOIN Employee E ON E.EID = A.EID
) T
WHERE T.Age > 50
ORDER BY T.Name

Rewriting the query using CTE expressions :
WITH T(Address, Name, Age) AS
(
SELECT A.Address, E.Name, E.Age
FROM Address A
INNER JOIN Employee E ON E.EID = A.EID
)
SELECT * FROM T WHERE T.Age > 50
ORDER BY T.NAME 

Rewriting the query using two CTE expressions:
WITH T1(Address, Name, Age) AS
(
SELECT A.Address, E.Name, E.Age
FROM Address A
INNER JOIN Employee E ON E.EID = A.EID
),
T2(Name, Desig) AS
(
SELECT NAME, DESIG FROM Designation
)
SELECT T1.*, T2.Desig
FROM T1
INNER JOIN T2 ON T1.Name = T2.Name
WHERE T1.Age > 50
ORDER BY T1.Name

General example:
WITH ShowMessage(STATEMENT, LENGTH) AS
(
SELECT STATEMENT = CAST('I Like ' AS VARCHAR(300)), LEN('I Like ')
UNION ALL
SELECT CAST(STATEMENT + 'CodeProject! ' AS VARCHAR(300)) ,
LEN(STATEMENT) FROM ShowMessage
WHERE LENGTH < 300
)
 SELECT STATEMENT, LENGTH FROM ShowMessage

Temporary Table: A local temporary table exists only for the duration of a connection in which it is defined. 
CREATE TABLE #tempLocal
(
NameId INT,
Fname VARCHAR(50),
Lname VARCHAR(50)
)

A global temporary table is visible to all sessions. It is dropped automatically when the session that created it ends and all other sessions have stopped referencing it.
Created with ##.

Difference between CTE and temporary table, and when to use each:
A CTE is not materialized in the TempDB database; it is expanded inline into the query plan.
We cannot create indexes or constraints on a CTE.

A CTE can be used to create a recursive query.

Temporary tables and table variables are created in the TempDB database.

MAXRECURSION can be used to prevent a poorly formed recursive CTE from entering an infinite loop.

Difference Between Delete and Truncate:
Delete command removes the rows from a table based on the condition that we provide with a WHERE clause. 
Truncate will actually remove all the rows from a table and does not allow filtered deletion using WHERE clause. 

TRUNCATE: 
TRUNCATE is faster and uses fewer system and transaction log resources than DELETE.
TRUNCATE removes the data by deallocating the data pages used to store the table's data, and only the page deallocations are recorded in the transaction log.

TRUNCATE removes all rows from a table, but the table structure, its columns, constraints, indexes remains. The counter used by an identity for new rows is reset to the seed for the column. You cannot use TRUNCATE TABLE on a table referenced by a FOREIGN KEY constraint. Because TRUNCATE TABLE is not logged, it cannot activate a trigger. 
TRUNCATE can be rolled back only if it is issued inside an explicit transaction that is still open. 
TRUNCATE is DDL Command. 
TRUNCATE Resets identity of the table. 

DELETE: 
DELETE removes rows one at a time and records an entry in the transaction log for each deleted row. If you want to retain the identity counter, use DELETE instead. 
If you want to remove table definition and its data, use the DROP TABLE statement. 
DELETE can be used with or without a WHERE clause.
DELETE Activates Triggers. 
DELETE can be rolled back.
DELETE is DML Command.
DELETE does not reset identity of the table.

DELETE and TRUNCATE both can be rolled back when surrounded by TRANSACTION if the current session is not closed. If TRUNCATE is written in Query Editor surrounded by TRANSACTION and if session is closed, it can not be rolled back but DELETE can be rolled back.

UPDATE STATISTICS updates query-optimization statistics on a table or indexed view. Run it after a large amount of deletion, modification, or bulk copy into the table takes place. 

SQL Profiler is used to monitor events in an instance of SQL Server. You can capture and save data about each event to a file or a SQL Server table to analyze later. 

CDC:
Change Data Capture records insert, update and delete activity in SQL Server tables. 
Steps:
1. Set up and configure CDC (CDC is disabled by default):

DECLARE @rc INT
EXEC @rc = sys.sp_cdc_enable_db
SELECT @rc
-- new column added to sys.databases: is_cdc_enabled
SELECT name, is_cdc_enabled FROM sys.databases

EXEC sys.sp_cdc_enable_table @source_schema = 'dbo', @source_name = 'customer',
    @role_name = 'CDCRole', @supports_net_changes = 1

2. Use CDC to extract rows that have been inserted, updated or deleted via T-SQL queries. The __$operation column values are:
1 = delete, 2 = insert, 3 = update (values before update), 4 = update (values after update).

Index: 
A clustered index stores the actual data rows at the leaf level of the index. 
The leaf nodes of a non-clustered index contain only the values from the indexed columns and row locators that point to the actual data rows, rather than contain the data rows themselves. This means that the query engine must take an additional step in order to locate the actual data. 

When a clustered index is used, rows are stored in sequential order according to the index column value; for this reason, a table can contain only one clustered index, which is usually used on the primary index value.

Index Types:
Composite index: An index that contains more than one column. In Sql Server 2012, you can include up to 16 columns in an index, as long as the index doesn’t exceed the 900-byte limit.

Unique index: an index that ensures the uniqueness of each value in the indexed column.

Covering index: includes all the columns that are used to process a particular query. 

1. Create Indexes on the columns that are frequently used. 
Indexing on columns used in the WHERE clause improves the query performance.
2. We can evaluate the selectivity of an index by examining its statistics, for example with DBCC SHOW_STATISTICS('orders', 'customerid'). (The sp_show_statistics, sp_show_statistics_steps and sp_show_statistics_columns procedures are the SQL Server Compact equivalents.)
3. Create multi-column indexes.
Eg: CREATE INDEX Idx_Employee ON Employee(LastName ASC, FirstName ASC)

You should reorganize indexes when the external fragmentation value for the corresponding index is between 10-15% and the internal fragmentation value is between 60-75%. Otherwise, you should rebuild indexes.
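A sketch of checking fragmentation with a DMV and then acting on it; the index and table names are illustrative, and the thresholds follow the guidance above.

```sql
-- Inspect average fragmentation per index in the current database.
SELECT OBJECT_NAME(ips.object_id) AS TableName,
       i.name AS IndexName,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ips
JOIN sys.indexes i
  ON ips.object_id = i.object_id AND ips.index_id = i.index_id;

-- Low fragmentation: reorganize (always online, lighter weight).
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders REORGANIZE;

-- High fragmentation: rebuild (recreates the index).
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders REBUILD;
```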

NOLOCK: ignore locks and read directly from the tables. It will read uncommitted data. (Read Uncommitted) 

System Databases in Sql Server
Master
Model 
MSDB 
TempDB 
Resource 
Distribution 
ReportServer 
ReportServerTempDB 

Limitations on Views:
A view is modified with ALTER VIEW, or by dropping and recreating it.
For an indexed view, the ANSI_NULLS and QUOTED_IDENTIFIER options must be set to ON.


Error Handling: TRY...CATCH, THROW
If any error occurs in the regular code, execution is transferred to the CATCH block and the error-handling code is executed. Typically, your CATCH block rolls back any open transaction and re-raises the error, so that the calling client program understands that something went wrong.
TRY-CATCH does not catch compilation errors.

SET XACT_ABORT ON: any open transaction is rolled back and execution is aborted.
SET NOCOUNT ON: It suppresses messages like (1 row(s) affected) in the Message tab.

Some of the most common reasons for slow-running queries and updates are:
1) The CPU is the bottleneck. Common causes: a non-optimal query plan, a poor SQL Server configuration, improper application/database design, or insufficient hardware resources.
2) Slow network communication (high latency). The Windows System Monitor (Start -> Run -> PerfMon, under 'Data Collector Sets') can be used to monitor the performance of SQL Server and non-SQL Server related components.
3) Insufficient memory in the server, or memory not properly configured for the SQL Server instance. You may use SQL Server Profiler to help identify the slow query or queries.

To troubleshoot slow-performing queries, capture the following events:
1. RPC:Completed: fires after a stored procedure is executed as a remote procedure call.
2. SP:StmtCompleted: tells us when a statement within a stored procedure has completed, and provides the T-SQL code along with the event's Duration, CPU, Reads, and Writes.
3. SQL:BatchStarting: fires whenever a new Transact-SQL batch begins.
4. SQL:BatchCompleted: occurs when a Transact-SQL batch completes, providing the Duration, CPU, Reads, and Writes of the statement.
5. Showplan XML: displays the graphical execution plan of a query.

4) Insufficient useful statistics on indexed columns. Set AUTO_CREATE_STATISTICS to ON, or use CREATE STATISTICS manually. You can use SQL Server Profiler, or the graphical execution plan in SQL Server Management Studio, to determine whether a query has enough statistics.
5) Outdated statistics on indexed columns. Statistics can be updated manually, using SSMS or the UPDATE STATISTICS statement, or automatically, if the AUTO_UPDATE_STATISTICS database option is set to TRUE.
6) Insufficient useful indexes.
7) Lack of useful indexed views.
8) Lack of useful data striping.
9) Improper partitioning of data.

INDEX SCAN: 
Scans all the pages in Non-clustered and Clustered Index. All the rows in the leaf level of the index are scanned. When a table has a Clustered Index, it will do a Clustered Index Scan and when the table does not have a clustered index, it will do a Table Scan. 

Table Scan: is a scan of the data pages of the table.

INDEX SEEK: traverse through B-TREE index structure from root down to the leaf. Index seeks are preferred over index scans. Index scans are preferred over table scans. 
   
DTA
DTA (Database Engine Tuning Advisor) is a tool that helps you figure out whether additional indexes or partitioning would be helpful. Here is a summary of the options:
1. Adding indexes (clustered, non-clustered and indexed views).
2. Adding partitioning.
3. Adding statistics, including multi-column statistics, which are not created automatically even when the AUTO_CREATE_STATISTICS database option is set to ON.

Select the query in SSMS, right-click and choose 'Analyze Query in Database Engine Tuning Advisor'; when the tool starts, pick options in the 'General' tab and then the 'Tuning Options' tab, and start the analysis.
To use SQL Profiler instead: in SSMS, Tools -> SQL Server Profiler, select options in the 'General' tab and then the 'Events Selection' tab, and run.

Transaction Isolation Level: 
Isolation level controls how 2 or more transactions running simultaneously should be isolated from each other in terms of locking and blocking resources. Isolation level determines the level of concurrency and data consistency. 
SET TRANSACTION ISOLATION LEVEL { READ UNCOMMITTED | READ COMMITTED| REPEATABLE READ | SNAPSHOT | SERIALIZABLE } 

READ UNCOMMITTED: statements can read rows that have been modified by other transactions but not yet committed. It does not issue shared locks to prevent other transactions from modifying data read by the current transaction. This allows dirty reads. 
READ COMMITTED: Statements cannot read data that has been modified but not committed by other transactions. This prevents dirty reads. 
SERIALIZABLE: Statements cannot read data that has been modified but not yet committed by other transactions. No other transactions can modify data that has been read by the current transaction until the current transaction completes.
Serializable isolation level provides the highest level of data consistency but at the cost of greatly reduced concurrency. Phantom read means 2 reads from a single transaction return a different number of records. 

How to identify blocking in SQL Server: 
1. sp_who2 
2. SQL Server Profiler: In the Event Selection tab --> select Show all events--> Errors and Warnings --> check the 'Blocked process report' and then run the application. 
3. SQL Server Management Studio Activity Monitor --> Rt click Instance Name --> Activity Monitor 
4. SQL Server Management Studio Reports --> right click | Reports | Standard Reports | Activity - All Blocking Transactions. 
5. The sys.dm_os_waiting_tasks DMV returns information about the tasks that are waiting on resources.
 6. sys.dm_exec_requests DMV provides details on all of the processes running in SQL Server. 
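Options 5 and 6 above can be combined into one diagnostic query; the column selection here is an illustrative sketch.

```sql
-- Requests that are currently blocked, with the text they are running.
SELECT r.session_id,
       r.blocking_session_id,   -- non-zero: the session holding the lock
       r.wait_type,
       r.wait_time,
       t.text AS sql_text
FROM sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
WHERE r.blocking_session_id <> 0;
```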

Tips to minimize Blocking: optimize Transact-SQL code; optimize indexes; break long transactions into multiple, smaller transactions; avoiding cursors 

Tips to avoid Deadlocks: 
1. Add a query time-out or a LOCK_TIMEOUT for each of the queries, or 
 2. Use a bound connection for the application. 
3. Use the NOLOCK query hint when using SELECT 
4. Change the default isolation level to read committed snapshot isolation (RCSI). The read query does not have to wait for the release of the lock. But this can cause phantom reads or non-repeatable reads. 

sp_who: Provides information about current users, sessions and processes in an instance of the SQL Server Database Engine.
sp_who2: provides information about current SQL Server processes associated with users, application, database, CPU time, etc.

To Kill the Session:
SELECT @@SPID
Kill <SPID> : To kill the session.

Which service requires to start a job? 
SQL Server Agent Service 

What is the Resource database?
All the system information pertaining to that specific instance is stored in the Resource database. When a service pack / hot fix is installed the resource database is updated. It cannot be backed up / restored using general backup / restore procedures from inside SQL Server. It has to be manually backed up similar to a file backup on a windows server. In SQL Server 2008 the location of the Resource database has been changed to the Binary directory Binn.

The MSDB database stores information related to the configuration of SQL Server Agent jobs, job schedules, alerts, operators, and so on. It is also used to store information related to database backups and restores.

A table has 2 triggers. How do we execute the triggers in a specific order?
sp_settriggerorder:
EXEC sp_settriggerorder @triggername = 'tr_customer_1', @order = 'first', @stmttype = 'insert', @namespace = NULL
EXEC sp_settriggerorder @triggername = 'tr_customer_2', @order = 'last', @stmttype = 'insert', @namespace = NULL
GO

SET NOCOUNT ON

The @@ROWCOUNT function is updated even when SET NOCOUNT is ON.

Which services are installed during Sql Server installation
SSIS
SSAS
SSRS
SQL Server (MSSQLSERVER)
SQL Server Agent Service
SQL Server Browser
 SQL Full-Text

Referential Integrity: is a method for ensuring the correctness of data in the RDBMS. Referential integrity refers to the consistency that must be maintained between primary and foreign keys, i.e. every foreign key value must have a corresponding primary key value.

 CREATE TABLE EmpMaster 
(EmpId INT PRIMARY KEY, 
EmpName VARCHAR(25));

 CREATE TABLE DeptChild
(
DeptId INT PRIMARY KEY,
DeptName VARCHAR(20),
EmpId INT FOREIGN KEY REFERENCES EmpMaster(EmpId) ON DELETE CASCADE ON UPDATE CASCADE
);
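A quick sketch of how ON DELETE CASCADE behaves with the tables above (the sample values are made up):

```sql
INSERT INTO EmpMaster (EmpId, EmpName) VALUES (1, 'Alice');
INSERT INTO DeptChild (DeptId, DeptName, EmpId) VALUES (10, 'Sales', 1);

-- Deleting the parent row also deletes the dependent child row,
-- because the foreign key was declared ON DELETE CASCADE.
DELETE FROM EmpMaster WHERE EmpId = 1;

SELECT COUNT(*) FROM DeptChild;   -- 0: the child row was removed by the cascade
```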

Different Ranking Functions: 
Ranking functions return a ranking value for each row in a partition.
1. RANK(): Returns the rank of each row within the partition of a result set.
2. DENSE_RANK(): Returns the rank of rows within the partition of a result set, without any gaps in the ranking.
3. NTILE(n): Distributes the rows in an ordered partition into n groups. The groups are numbered, starting at one. For each row, NTILE returns the number of the group to which the row belongs.
4. ROW_NUMBER(): Returns the sequential number of a row within a partition of a result set, starting at 1 for the first row in each partition.
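A side-by-side sketch of all four functions over a hypothetical Scores table with one tie (table and column names are illustrative):

```sql
SELECT Name,
       Score,
       RANK()       OVER (ORDER BY Score DESC) AS Rnk,      -- gaps after ties: 1, 2, 2, 4
       DENSE_RANK() OVER (ORDER BY Score DESC) AS DenseRnk, -- no gaps:        1, 2, 2, 3
       ROW_NUMBER() OVER (ORDER BY Score DESC) AS RowNum,   -- always unique:  1, 2, 3, 4
       NTILE(2)     OVER (ORDER BY Score DESC) AS Bucket    -- two groups:     1, 1, 2, 2
FROM dbo.Scores;
```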

 SELECT .. INTO is different from INSERT INTO SELECT. 
SELECT INTO tries to create the table in the INTO clause and then inserts the data from the SELECT into the table. 
INSERT INTO SELECT does not create the destination table. It will directly try to insert the result of the SELECT into the destination table.
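The difference in a small sketch (dbo.NewEmp and dbo.ExistingEmp are illustrative names; EmpMaster is the table defined earlier):

```sql
-- SELECT ... INTO: creates dbo.NewEmp and fills it in one step.
SELECT EmpId, EmpName
INTO dbo.NewEmp
FROM EmpMaster;

-- INSERT INTO ... SELECT: dbo.ExistingEmp must already exist.
INSERT INTO dbo.ExistingEmp (EmpId, EmpName)
SELECT EmpId, EmpName
FROM EmpMaster;
```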

Temp table Vs Table Variable
Temporary Table is created in TempDB.
The scope of temporary table is the session in which it is created.
Temp tables are created with a # prefix (# - local, ## - global).
You can create indexes and statistics on temporary tables.
You can also apply DDL statements against temporary tables to add constraints, defaults, and referential integrity such as primary and foreign keys.
You can also add and drop columns from temporary tables. 

Table variable is created as a Variable using DECLARE
eg: DECLARE @Cars Table
      (
      CarID int, ModelName Varchar(20)
      )

Table variables cannot have non-clustered indexes.
Statistics cannot be created against table variables.
Constraints and default values must be declared inline with the table variable; they cannot be added afterwards, because ALTER TABLE is not allowed on table variables.
A table variable's lifespan is the duration of the batch, function, or stored procedure in which it is declared.

When to use Table Variable and When to use Temporary Table:
Use the Table variable for returning results from user-defined functions that return table values.
Use the Temporary table for storage and manipulation of temporary data; particularly when dealing with large amounts of data.
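For instance, a multi-statement table-valued function returns its result through a table variable (the function name is illustrative; EmpMaster is the table defined earlier):

```sql
CREATE FUNCTION dbo.GetEmpById (@EmpId INT)
RETURNS @Result TABLE (EmpId INT, EmpName VARCHAR(25))
AS
BEGIN
    -- The return value of a multi-statement TVF is a table variable.
    INSERT INTO @Result (EmpId, EmpName)
    SELECT EmpId, EmpName
    FROM EmpMaster
    WHERE EmpId = @EmpId;
    RETURN;
END;
GO

SELECT * FROM dbo.GetEmpById(1);
```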

Similarities with temporary tables include:
1. Both are instantiated in tempdb.
2. Clustered indexes can be created on table variables and temporary tables.
3. Both are logged in the transaction log.
4. Just as with temp and regular tables, users can perform all Data Manipulation Language (DML) queries against a table variable: SELECT, INSERT, UPDATE, and DELETE.

Difference between CheckSum and HashBytes:
Checksum:
The Checksum function in SQL Server will return a numeric value based on the values of fields in a row. You can use the checksum value as a means of comparison when detecting changes in a row. Let's say, for example, you have 20 different columns that could change values. If a value changes, then the Checksum value for that row will change as well. 

Select ProductID,ProductNumber,
        Name,Color,ListPrice,ProductLine,
        Checksum(ProductID, ProductNumber, Name, Color, ListPrice, ProductLine) AS ChecksumValue
From Production.Product

Hashbytes:
The Hashbytes function returns a hash value of the specified column values. When using the Hashbytes function, you can specify a hash type. You can choose MD2, MD4, MD5, SHA, SHA1, SHA2_256, or SHA2_512.

Select ProductID,ProductNumber,
       Name,Color,ListPrice,ProductLine,
       HASHBYTES('MD5', Name + color + ProductLine) as HashValue
From Production.Product

The hash value is much longer and more complex than the Checksum value. There is a smaller chance of having duplicate values with the hash. 
However, there is an issue. With Hashbytes, if there is a NULL in one or more columns, then the hash will be NULL as well. To avoid this, we would need to replace the NULLs.

Select ProductID,ProductNumber,
       Name,Color,ListPrice,ProductLine,
       HASHBYTES('MD5', Name + ISNULL(Color, 'Unknown') + ISNULL(ProductLine, 'Unknown')) as HashValue
From Production.Product


Notice that the columns being included in the hash are not separated by commas, but concatenated.

Difference between CharIndex and PatIndex:
Similarities:
Both the CharIndex and PatIndex functions are categorized under the string functions in Sql Server.
Both functions return the starting position of the matching pattern given in the function.

Differences:
The PatIndex function is used with wildcard characters.
The CharIndex function canNOT be used with wildcard characters.

SELECT (PATINDEX('%telli%', 'Business Intelligence'))  -- returns 12

SELECT (CHARINDEX('telli', 'Business Intelligence'))   -- returns 12

Difference between VARCHAR and NVARCHAR:
Varchar(n):
Non-Unicode Variable Length character data type.
It takes 1 byte per character.
Optional parameter value n can be from 1 to 8000. Can store maximum of 8000 Non-Unicode characters.

NVarchar(n):
Unicode variable length character data type. It can store both Non-Unicode and Unicode characters.
It takes 2 bytes per character.
Optional Parameter n value can be from 1 to 4000. Can store maximum 4000 Unicode / Non-Unicode characters.
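The storage difference is easy to see with DATALENGTH:

```sql
-- VARCHAR stores 1 byte per character; NVARCHAR stores 2.
SELECT DATALENGTH('Hello')  AS VarcharBytes,   -- 5
       DATALENGTH(N'Hello') AS NvarcharBytes;  -- 10
```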

Difference between CAST and CONVERT:
Both are used to convert data from one type to another. 

CAST is part of the ANSI SQL specification, whereas CONVERT is not; CONVERT is specific to SQL Server.

CONVERT differences lie in that it accepts an optional style parameter which is used for formatting.
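For example, CONVERT's style parameter controls date formatting, which CAST cannot do (style 103 is dd/mm/yyyy, style 112 is yyyymmdd):

```sql
DECLARE @d DATETIME = '2014-04-20';

SELECT CAST(@d AS VARCHAR(30))       AS CastResult,  -- no formatting control
       CONVERT(VARCHAR(10), @d, 103) AS British,     -- 20/04/2014
       CONVERT(VARCHAR(8),  @d, 112) AS ISOBasic;    -- 20140420
```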

Difference between STUFF vs REPLACE:
The STUFF function inserts a string into another string. It deletes a specified length of characters in the first string at the start position and then inserts the second string into the first string at the start position.


REPLACE function replaces all occurrences of a specified string value with another string value.
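Both functions side by side:

```sql
-- STUFF: delete 3 characters starting at position 1, then insert 'XYZ' there.
SELECT STUFF('abcdef', 1, 3, 'XYZ');    -- XYZdef

-- REPLACE: swap every occurrence of 'ab' for '--'.
SELECT REPLACE('abcabc', 'ab', '--');   -- --c--c
```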