Saturday, July 31, 2010

MSSQL | A relational DBMS from Microsoft that is a major component of the Windows Server System

A relational DBMS from Microsoft that is a major component of the Windows Server System. It is Microsoft's high-end client/server database and is closely integrated with Microsoft Visual Studio and the Microsoft Office System. Numerous editions are available, including those for Enterprise, Developer, Workgroup and 64-bit platforms.

SQL Server was originally developed by Sybase and also sold by Microsoft for OS/2 and Windows NT. In 1992, Microsoft began developing its own version, but acknowledged Sybase as the original copyright holder until 1994. The products diverged in 1995, when Sybase renamed its own product Adaptive Server Enterprise as a means of differentiation.

MSSQL is a database system from Microsoft, mostly used on high traffic web sites running on the Windows platform.

SQL Server has ever-improving functionality that helps us peek into, shred, store, manipulate and otherwise utilize XML.
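
For instance, the FOR XML clause turns an ordinary result set into XML on the server. A minimal sketch, assuming a hypothetical dbo.Orders table:

SELECT OrderID, CustomerID
FROM dbo.Orders
FOR XML AUTO, ELEMENTS  -- ELEMENTS nests each column as an element instead of an attribute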

Apart from online retailers and web publishers, most businesses today have become web-centric and have stepped into the domain of e-commerce quite willingly. SQL Server provides all the tools needed to create powerful e-commerce applications.

Entering and exchanging data, or OLTP (online transaction processing), is only one part of database management. OLAP (online analytical processing) services make it possible to analyze data at high levels of aggregation and trace patterns. SQL Server Analysis Services is a direct descendant of the SQL Server 7 OLAP Server, but with vastly enhanced services.

Data mining helps users analyze data in voluminous relational databases and multidimensional OLAP cubes to uncover hidden patterns that can be used to predict future trends. SQL Server supports clustering algorithms, which group records that exhibit similar and predictable characteristics into clusters. For example, you could cluster the behavior of potential buyers and base your marketing campaign on the results.

Full-Text Search is a separate service that indexes all sorts of information from most of the Back Office products. Much digitally stored information takes the form of unstructured text saved in plain text files or formatted documents. Full-Text Search enables access to all of this data in a uniform manner.

SQL Server Reporting Services is a powerful solution that enables the authoring, management, and delivery of both paper-oriented reports and interactive Web-based reports. With SQL Reporting Services, organizations can create reports to be published to the Report Server using Microsoft or third-party design tools that use Report Definition Language (RDL), an XML-based industry standard. Report definitions and resources are published and managed as Web services and users can view reports in Web-based formats or via email.

Almost every web developer has a favorite database that he or she feels comfortable working with, having already learned all its tricks and gimmicks. It is understandable why these databases are used so frequently: they are well documented, have a community behind them, integrate with the most popular CMSs, are easy to use, are offered by most hosting companies, and so on.

But there are also many other databases that are growing in popularity day by day and may have advantages over what you are already using.

All activities in SQL Server can be performed by members of the sysadmin server role, who have complete control over all database functions. Members of the serveradmin server role change server configuration parameters and shut down the server. Setupadmin server role members can add or remove linked servers, manage replication and extended stored procedures, and execute certain system stored procedures such as sp_serveroption. Members of the securityadmin server role create and manage server logins, manage auditing, and read error logs. Processadmin server role members manage the processes that run in SQL Server. Database creation, alteration and resizing are the functions of dbcreator server role members, and disk files are managed by diskadmin server role members.
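
As a quick illustration, a login is added to one of these fixed roles with the sp_addsrvrolemember system procedure (the login name here is hypothetical):

-- grant the dbcreator role to an existing login
EXEC sp_addsrvrolemember @loginame = 'CORP\jsmith', @rolename = 'dbcreator'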

Now that PHP runs so well on Windows web servers and speaks natively with SQL Server, there's no longer a need to keep PHP and MS SQL Server separate; the benefits of both are available to use. This article explains how to enable the sybase or mssql modules in your PHP installation and how to use SQL Server with the DB package.

Most of the articles about using PHP for database applications talk about using it in conjunction with MySQL. If they really want to stretch themselves, they talk about PostgreSQL. The business world uses a different standard though, and Microsoft's SQL Server can be found in most corporate data centers.

Traditionally the installation of a Microsoft architecture meant that it was Microsoft and closed source software all the way. If you needed to talk to a Microsoft SQL Server database, you used Microsoft development tools exclusively. Likewise, if you needed the features of PHP, you stuck to open source database engines. Adding MySQL or PostgreSQL to the database server mix made for some interesting programming to synchronize data, and it made life harder for the system administration staff.

Now that PHP runs so well on Windows web servers and speaks natively with SQL Server, there's no need for this unnecessary division. PHP can be a full corporate citizen. I am experienced with development in both Microsoft and open source technologies, and I admit that they both have their benefits and their limitations. Now I can take the benefits of both. I can have the power of SQL Server with a good programming language that I enjoy using.

This article assumes that you are already familiar with writing PHP database applications and that you have at least some familiarity with SQL Server.

HostingPalace brings innovative web hosting technology to the domestic hosting market. The hosting control panel HostingPalace provides for your domain is among the best currently available. It is your domain control panel, from which you manage every aspect of your domain and its contents. It was designed so that an individual can even act as a domain registrar, with the authority to register domains for himself or for his clients, and to modify the hosting account whenever requirements change; domain resellers in particular benefit from this ability to register domains on behalf of their clients. The panel has also become more user-friendly and more reliable.

When you access your web hosting account, everything you need is available right there in the hosting panel or domain control panel itself.

The main tools available within your panel let you do the basic domain and webspace administration required to keep your website in order. You can set or reset your login details, FTP details and email accounts from the control panel. You can also access and maintain all your databases, review basic statistics for your website, check your bandwidth use, check which scripts are supported, block certain IP addresses from accessing your website (depending on the terms of your hosting package), check for and clean up viruses, back up your entire site, and perform other general maintenance, or grooming, of your domain.

If your web hosting plan allows it, you can actually set up different domains within your single account and control them all through your hosting or domain control panel.

Within your web hosting panel you will more often than not find a handy little extra application called a file manager. It makes it easy for a client to deploy website files to the webspace without using an FTP account. This built-in feature is especially handy for hosting resellers, who would otherwise have to remember or look up FTP login details for each of their clients' domains. Uploads and downloads are not tightly restricted: you can deploy any number of files to your webspace, although some online file managers limit how many files you can upload at one time because you cannot browse for an unlimited number at once. Bandwidth is not much affected by this, and unlimited uploads and downloads are possible where the package includes unlimited webspace and bandwidth.

Some hosting panels offer the option of adding Java applications separately to the webspace package. As Java is an important and widely used platform, most web hosting companies make sure their online software or control panel includes the compatible features needed to enable Java applications.

Many web hosting companies have added an online shopping application to their webspace packages, which lets clients add a store to their website with ease rather than wrestling with manual setup and editing. Nowadays these applications usually come free with web hosting packages; most hosting companies include them at no charge so their clients can benefit when hosting web applications.

HostingPalace has started providing free search engine submission for clients of two years' standing. Based on client demand for free extras alongside their webspace packages, HostingPalace has implemented this service to help clients submit their website URL free of charge to over 800,000 search engines.

Any code that runs under the Common Language Runtime (CLR) is managed code. The CLR is the core of the .NET environment, providing all the services necessary to execute managed code. SQL Server 2005 integrates tightly with the CLR, enabling developers to create stored procedures, triggers, user-defined functions, aggregates and user-defined types in managed code.
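
On the T-SQL side, hooking managed code into the server is a two-step affair. A minimal sketch, assuming you have already compiled a hypothetical HelloWorld.dll whose StoredProcedures class exposes a HelloWorld method:

-- register the compiled assembly with the database
CREATE ASSEMBLY HelloWorld
FROM 'C:\assemblies\HelloWorld.dll'
WITH PERMISSION_SET = SAFE
GO

-- expose the managed method as an ordinary stored procedure
CREATE PROCEDURE dbo.usp_HelloWorld
AS EXTERNAL NAME HelloWorld.StoredProcedures.HelloWorld
GO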

T-SQL suits best when you need little procedural logic and mostly access data on the server. If your data must pass through complex logic, the better option is managed code. For data-intensive operations T-SQL is the easier approach, but it lacks ease of programming: you can end up writing many lines of code to simulate operations on characters, strings, arrays, collections, bit shifting and so forth. For mathematical operations and regular expressions you need a language that provides an easy, clean, yet powerful way of handling them, and if you are forced to perform such operations in T-SQL it quickly becomes annoying.

Integrating DML operations with managed code also helps you split logic into classes and namespaces, somewhat like schemas in the database. That said, integrating the CLR into SQL Server does not replace the business tier of your application. The benefits of integrating the CLR with SQL Server include:

The T-SQL statements that you execute run on the server. When you want to distribute load between the client and the server, you can use managed code: critical logic can run on the client side so that the server stays busy only with data-intensive operations.

SQL Server does provide extended stored procedures to reach certain system-related functions from your T-SQL code, but you may have to compromise the integrity of the server to use them. Managed code, by contrast, provides type safety, effective memory management and better synchronization of services, tightly integrated with the CLR and hence with SQL Server 2005. Integrating the CLR with SQL Server therefore provides a scalable and safer means of accomplishing tasks that are tough or almost impossible in T-SQL.

The .NET Framework provides rich support for XML-based operations in managed code; although SQL Server supports XML operations natively, you can often perform them in .NET with less effort than in T-SQL scripts. Nested transactions in T-SQL also have limitations when dealing with loopback connections, whereas managed code handles this better by setting the attribute "enlist=false" in the connection string.

When working in T-SQL you cannot return rows from the middle of an operation; the client sees nothing of the result set until execution finishes. Streaming rows to the caller as they are produced is termed pipelining of results, and it can be achieved with CLR integration.

If you check your database configuration, you will notice that CLR integration is turned off by default. It is enabled or disabled by setting the "clr enabled" option to 1 or 0. Once CLR integration is disabled, all executing CLR procedures are unloaded across all application domains.
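
The option is a server-level setting, changed through sp_configure:

-- turn CLR integration on (use 0 to turn it back off)
EXEC sp_configure 'clr enabled', 1
RECONFIGURE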

Performance is a measure of the response time you get for any operation you perform against the server. Modern databases are designed so that they do not halt the business as load increases, but the performance of the database in an enterprise project is usually given low priority in the initial stages of design. Poor database design can lead to slow-running transactions, excessive blocking, poor resource balancing and so forth, all of which cost an excessive amount of time and money to put right.

So why do we need to care about performance anyway? Better performance provides faster transactions and good scalability: more batch-processing jobs get done in less time with less downtime, users see better response times, and services stay fast even under increased load. Performance should be considered from the day we start designing the database, because as the complexity of the design increases it becomes harder and harder to pull out the design issues and recover good performance.

There are many techniques for monitoring and improving performance, but we shall limit ourselves here to certain tips that will help fine-tune the database.

SQL Server 2005 introduces a feature called row-level versioning (RLV), which allows effective management of concurrent access to data while maintaining its consistency. An isolation level decides the extent to which modified data is isolated from other sessions. RLV benefits data access across isolation levels because it helps eliminate locks for read operations, improving read concurrency: read operations running under an isolation level with RLV require no shared locks on the data, so they do not block other requests accessing the same data, and locking resources are minimized. On the other hand, when it comes to write operations, two write requests still cannot modify the same data at the same time.

Triggers fired by INSERT and DELETE operations work with versions of the rows, so triggers that modify data benefit from RLV. Rows in a result set are versioned when an INSERT, DELETE or UPDATE statement was issued before the data was accessed with a SELECT statement.

Transactions greatly affect data when you perform CRUD operations, and they may be executed in batches, with many requests operating on a single row or a row set. When a transaction modifies a row value, the previously committed row value is stored as a version in tempdb.

By setting the READ_COMMITTED_SNAPSHOT and ALLOW_SNAPSHOT_ISOLATION options to ON, logical copies are made of the data modified by transactions, and a transaction sequence number is assigned to every transaction that operates on data using row-level versioning. The transaction sequence number is incremented automatically each time a BEGIN TRANSACTION statement is executed.
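
Both options are set per database with ALTER DATABASE; the sketch below uses the AdventureWorks sample database, and note that READ_COMMITTED_SNAPSHOT can only be set when there are no other active connections to the database:

ALTER DATABASE AdventureWorks SET READ_COMMITTED_SNAPSHOT ON
ALTER DATABASE AdventureWorks SET ALLOW_SNAPSHOT_ISOLATION ON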

Changes to a row are therefore marked with transaction sequence numbers, and these TSNs are linked to the newer rows residing in the current database. The TSNs are monitored periodically, and the least-used numbers are deleted from time to time; it is up to the database to decide how long row versions are kept in tempdb.

The READ_COMMITTED_SNAPSHOT and ALLOW_SNAPSHOT_ISOLATION options must be turned on so that the READ COMMITTED and SNAPSHOT transaction isolation levels make use of the RLV system. The read committed isolation level supports distributed transactions, unlike snapshot, which does not. SQL Server uses the temporary database tempdb extensively to store its temporary result sets, and all row versions are stored there; once tempdb has exceeded its maximum space utilization, update operations stop generating versions. Applications that use read committed transactions do not even need to be refactored to enable RLV, and that level also consumes less tempdb storage; for these reasons the read committed isolation level is preferred over snapshot isolation.

Row-level versioning helps in situations where an application performs a lot of insert and update operations on the data while a bunch of reports read the same data in parallel. It can also prove beneficial if your server is experiencing relatively many deadlocks. And for systems performing mathematical computations that demand accuracy, RLV provides a consistent view of the data for such operations.

Error handling was a pretty tough job in the earlier versions of SQL Server. Developers had to perform a lot of conditional checks on the error code returned after each INSERT, UPDATE or DELETE operation, testing @@ERROR every time an operation might raise an error. Error messages can be generated either by SQL Server or thrown explicitly by the user. Let us first see how developers usually performed error handling in SQL Server 2000, using a stored procedure for the demonstration.
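
The original listing did not survive in this post, so what follows is a hypothetical reconstruction of the classic SQL Server 2000 pattern, assuming the AdventureWorks sample database; the nonexistent ContactID forces the foreign key violation shown below:

CREATE PROCEDURE dbo.uspAddEmployee_2000Style
AS
BEGIN
    DECLARE @err int

    INSERT INTO HumanResources.Employee
        (NationalIDNumber, ContactID, LoginID, Title, BirthDate,
         MaritalStatus, Gender, HireDate)
    VALUES ('999999999', 99999, 'adventure-works\test',
            'Tool Designer', '19800101', 'S', 'M', GETDATE())

    -- @@ERROR is reset by every statement, so capture it immediately
    SET @err = @@ERROR
    IF @err <> 0
    BEGIN
        PRINT 'Insert failed with error ' + CAST(@err AS varchar(10))
        RETURN @err
    END

    RETURN 0
END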

Msg 547, Level 16, State 0, Line 1
The INSERT statement conflicted with the FOREIGN KEY constraint "FK_Employee_Contact_ContactID". The conflict occurred in database "AdventureWorks", table "Person.Contact", column 'ContactID'.

The statement has been terminated.

As you can see, the error message has a Msg number, a severity Level, a State and a Line. "Msg" holds the error number generated for the message, in this case 547. All error messages are defined in the catalog view sys.messages. If you want custom error handling, you can use the sp_addmessage system procedure to add new error messages.

Next, you have the severity "Level" of the message. Severity codes lie in the range 0 to 25. Any error with severity 20 or above terminates the connection. Severities 17 to 19 indicate a resource problem, 11 to 16 indicate errors in the T-SQL scripts, and severities below 11 are warnings.

Next we have the "State" of the error message, an arbitrary integer in the range 0 to 127 that provides information about the source that issued the error. However, Microsoft has not disclosed much documentation on this.

Next is the "Line" number, which tells us where the error occurred in the procedure or T-SQL batch. And last comes the message text itself.

Understanding and implementing error handling in versions before SQL Server 2005 was manageable, but it involved a lot of housekeeping. SQL Server 2005 provides a flexible error handling mechanism in the form of TRY and CATCH blocks.
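
Again the original listing is lost, so here is a hypothetical sketch of the new construct, reusing the failing insert from above:

BEGIN TRY
    INSERT INTO HumanResources.Employee
        (NationalIDNumber, ContactID, LoginID, Title, BirthDate,
         MaritalStatus, Gender, HireDate)
    VALUES ('999999999', 99999, 'adventure-works\test',
            'Tool Designer', '19800101', 'S', 'M', GETDATE())
END TRY
BEGIN CATCH
    -- each built-in function below reports one part of the error
    PRINT 'Error number : ' + CAST(ERROR_NUMBER() AS varchar(10))
    PRINT 'Severity     : ' + CAST(ERROR_SEVERITY() AS varchar(10))
    PRINT 'State        : ' + CAST(ERROR_STATE() AS varchar(10))
    PRINT 'Line         : ' + CAST(ERROR_LINE() AS varchar(10))
    PRINT 'Message      : ' + ERROR_MESSAGE()
END CATCH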

When an error occurs, execution leaves the current point and enters the CATCH block. The functions following the PRINT statements are built-in functions that provide information about the error. You can also embed the whole construct in a stored procedure and call it wherever you need, or log the error messages to a table for debugging. The AdventureWorks database handles errors in a similar manner; its procedures uspLogError and uspPrintError do this job.

You can also use RAISERROR to define your own custom error messages. RAISERROR takes a system error code or a user-defined error code, which is eventually sent by the server to the connected application or caught within a TRY..CATCH block.
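
A hypothetical sketch that would produce output like the line below; the 52000 error code is an assumption, chosen only because user-defined message numbers must be greater than 50000:

BEGIN TRY
    -- the same failing insert as above
    INSERT INTO HumanResources.Employee
        (NationalIDNumber, ContactID, LoginID, Title, BirthDate,
         MaritalStatus, Gender, HireDate)
    VALUES ('999999999', 99999, 'adventure-works\test',
            'Tool Designer', '19800101', 'S', 'M', GETDATE())
END TRY
BEGIN CATCH
    DECLARE @msg nvarchar(2048)
    SET @msg = ERROR_MESSAGE()

    -- re-raise with our own wording; %s and %d are filled from the arguments
    RAISERROR('A serious error has terminated the program. Error message is %s, Error code is %d.',
              16, 1, @msg, 52000)
END CATCH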

A serious error has terminated the program. Error message is ...statement conflicted..., Error code is 52000.

Next time you are working with T-SQL code, you need not worry about implementing numerous checks for errors. The TRY..CATCH feature offers a better approach to error handling that minimizes the size of your code and improves readability.

Concurrency can be defined as the ability of multiple sessions to access shared data at the same time. It comes into the picture when a request reading data prevents other requests from changing the same data, or vice versa.

The row-level versioning discussed above allows concurrent access automatically, with no additional application control needed. Any relational database can support multiple simultaneous connections, and the job of handling concurrency between requests is usually left to the server: SQL Server internally takes care of blocking issues between two or more processes. But sometimes it may be necessary to take over some of the control of concurrent access in order to maintain the balance between data consistency and concurrency.

There are two kinds of concurrency control: optimistic and pessimistic. SQL Server uses a pessimistic concurrency model by default, so other transactions cannot read data until the current session commits; this is a writer block. Locking is a good choice for many database systems, but it can also introduce blocking issues: if results must be based only on committed data, the only option is to wait until changes are committed.

To put it in a straightforward manner, under pessimistic concurrency control the system is pessimistic: it assumes a conflict will arise whenever a read is requested against data being modified by another user, so locks are imposed to ensure that access to data in use by another session is blocked.

Optimistic concurrency, by contrast, works on the assumption that any request could modify data that is currently being read by another request. This is where row-level versioning is used: it checks the state of the data before accessing the modified version.

One of the common mistakes developers make is to execute SQL statements directly from their application. Worse, performance can degrade further when combinations of operators such as LIKE and NOT LIKE are used in those statements. It is always good practice to use stored procedures rather than stuffing queries into your application or web page; stored procedures help performance since they are precompiled.

Use string operations sparingly, as they are often costly, and do not use them in a JOIN condition. Using implicit or explicit functions in the WHERE clause can also hurt the server, and putting complex business logic in triggers is yet another performance issue. When you work with transactions, use appropriate isolation levels: proper use of isolation levels helps reduce locking and avoids dirty reads and writes.

If possible, avoid using cursors. One alternative is to use temporary tables with WHILE statements, breaking complex queries into several temporary tables and joining them later. Also, when working with large tables, select only the rows and columns needed in the result set; unnecessary columns and rows congest network traffic, which is again a performance bottleneck.
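
A minimal sketch of the temporary-table-and-WHILE pattern, assuming the hypothetical dbo.Orders table again (the ShippedDate column is also assumed):

-- collect the keys to process, then walk them without a cursor
SELECT OrderID AS id INTO #work FROM dbo.Orders WHERE ShippedDate IS NULL

DECLARE @id int
WHILE EXISTS (SELECT * FROM #work)
BEGIN
    SELECT TOP 1 @id = id FROM #work ORDER BY id
    -- per-row processing would go here
    DELETE FROM #work WHERE id = @id
END

DROP TABLE #work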

Create indexes only when they are really required, because SQL Server must arrange and maintain entries for each index you define. To make sure you are creating indexes for the right purpose, create them on columns used in WHERE conditions and in ORDER BY, GROUP BY and DISTINCT clauses; indexes that are never used only add overhead. It is also recommended to keep clustered index keys small and to define a narrow data range for the clustered indexes you maintain. Once you define a column as a foreign key, it is good practice to create an index on it. You can also use the Index Tuning Wizard to check index performance, and be sure to remove unused indexes.
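
Indexing a foreign key column, for example, is a single statement (the table and column are hypothetical):

CREATE NONCLUSTERED INDEX IX_Orders_CustomerID ON dbo.Orders (CustomerID)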

The way you design your database greatly affects SQL Server's performance. When working with tables, always use proper data types for the columns; if your data contains very large chunks of characters, you can use the text data type. Check that proper primary and foreign key relationships are defined across the tables. Make a practice of normalizing your database first, and only then denormalize where it improves performance; indexed views can serve this denormalization purpose. Analysis jobs usually consume more system resources, so it is recommended to use separate servers for analysis and transaction processing.

Do not use the sp_ prefix in your stored procedure names. Microsoft ships system procedures prefixed with sp_, so if you prefix your procedures the same way, SQL Server will first search the master database for the procedure and only then your application database. Again, this is a bottleneck.

Always use exception handling if you are working with transaction-based procedures. Proper error handling ensures security and provides a better picture of what to do when an unexpected error occurs.

If your client application does not need the count of rows affected by each operation, use SET NOCOUNT ON in your stored procedures. Without it, the number of rows affected is sent to the client application (ADO/ADO.NET), which then processes that result through its command or connection objects, causing extra overhead on both the client and the server.
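
The setting goes at the top of the procedure body; a sketch against the hypothetical dbo.Orders table:

CREATE PROCEDURE dbo.usp_GetOrders
AS
BEGIN
    SET NOCOUNT ON  -- suppress the "n rows affected" message to the client

    SELECT OrderID, CustomerID
    FROM dbo.Orders
END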

Set your database size initially instead of letting it grow automatically. To minimize disk reads and writes, you can place the log file and tempdb on devices separate from the data. You can use a RAID configuration with multiple disk controllers if the database performs large data warehouse operations. Give the server adequate memory and defragment indexes as and when needed. You can use the automatic database shrink option to reclaim unwanted space; in general, though, it is recommended that you use the default server configuration for your application.


