SQL developers and DBAs have many time-tested methods for speeding up queries. Here are 10 rules for making your database faster and more efficient.
1) Avoid cursors whenever possible
Cursors can cause a drastic decrease in speed. They process rows one at a time and can hold locks far longer than necessary, which reduces concurrency across your system.
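To illustrate the difference, here is a minimal sketch against a hypothetical Orders table (the table and column names are assumptions, not from the original):

```sql
-- Cursor version: updates one row per loop iteration and
-- keeps locks open for the whole loop.
DECLARE @OrderID int;
DECLARE order_cur CURSOR FOR
    SELECT OrderID FROM dbo.Orders WHERE Status = 'Pending';
OPEN order_cur;
FETCH NEXT FROM order_cur INTO @OrderID;
WHILE @@FETCH_STATUS = 0
BEGIN
    UPDATE dbo.Orders SET Status = 'Processed' WHERE OrderID = @OrderID;
    FETCH NEXT FROM order_cur INTO @OrderID;
END
CLOSE order_cur;
DEALLOCATE order_cur;

-- Set-based version: the same work in one statement,
-- with one short-lived lock.
UPDATE dbo.Orders SET Status = 'Processed' WHERE Status = 'Pending';
```

The set-based form lets the engine optimize the whole operation at once instead of paying the per-row overhead of the loop.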
2) If you cannot avoid them
If you really cannot avoid a cursor, run the cursor operations against a temp table instead of the live table. The single UPDATE statement against the live table is then much smaller and holds its lock only briefly, which improves concurrency greatly.
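A minimal sketch of this pattern, again with assumed table and column names:

```sql
-- Copy the working set into a temp table and run the cursor there.
SELECT OrderID, Amount, CAST(0 AS money) AS Discount
INTO #OrderWork
FROM dbo.Orders
WHERE Status = 'Pending';

DECLARE @OrderID int, @Amount money;
DECLARE work_cur CURSOR FOR SELECT OrderID, Amount FROM #OrderWork;
OPEN work_cur;
FETCH NEXT FROM work_cur INTO @OrderID, @Amount;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- Row-by-row logic touches only the temp table.
    UPDATE #OrderWork SET Discount = @Amount * 0.05 WHERE OrderID = @OrderID;
    FETCH NEXT FROM work_cur INTO @OrderID, @Amount;
END
CLOSE work_cur;
DEALLOCATE work_cur;

-- One short-lived lock on the live table at the very end:
UPDATE o
SET o.Discount = w.Discount
FROM dbo.Orders AS o
JOIN #OrderWork AS w ON w.OrderID = o.OrderID;

DROP TABLE #OrderWork;
```

All of the slow row-by-row work happens on the private temp table, so other sessions are never blocked by it.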
3) Use temp tables intelligently
There are many situations where a temp table helps. For instance, if you are joining to a large table and filtering on a condition against that large table, you can improve performance by extracting the subset of data you need from the large table into a temp table and joining against that instead. This can save a great deal of processing power if the procedure contains several queries that make similar joins to the same table.
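A sketch of the idea, with hypothetical Customers and Orders tables standing in for your own:

```sql
-- Pull only the needed subset of the large table once.
SELECT CustomerID, Region
INTO #ActiveCustomers
FROM dbo.Customers          -- the large table
WHERE IsActive = 1;         -- the filter you would otherwise repeat

-- Later queries in the procedure join the small temp table instead:
SELECT o.OrderID, c.Region
FROM dbo.Orders AS o
JOIN #ActiveCustomers AS c ON c.CustomerID = o.CustomerID;
```

If several queries in the procedure need the same filtered subset, the large table is scanned once rather than once per query.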
4) Pre-stage your data
This is an age-old technique that is often overlooked. If you have procedures that repeatedly join the same large tables, you can pre-stage the data: run the join ahead of time and persist the result in a table. Reports can then run against the pre-staged table and avoid the large join altogether. Most environments have a handful of popular tables that are joined all the time, yet this technique is rarely used. It is an excellent way to save server resources, and there is usually no reason these joins cannot be pre-staged.
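A minimal sketch, assuming a scheduled job refreshes a staging table named dbo.OrderCustomerStage (the names are illustrative):

```sql
-- Nightly job: persist the expensive join ahead of time.
-- (In a repeating job you would TRUNCATE and INSERT rather
-- than SELECT INTO, which fails if the table already exists.)
SELECT o.OrderID, o.OrderDate, c.CustomerName, c.Region
INTO dbo.OrderCustomerStage
FROM dbo.Orders AS o
JOIN dbo.Customers AS c ON c.CustomerID = o.CustomerID;

-- Reports read the pre-joined table directly, skipping the join:
SELECT Region, COUNT(*) AS OrderCount
FROM dbo.OrderCustomerStage
GROUP BY Region;
```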
5) Reduce the use of nested views
Views are used to conceal large queries from users, but when you nest one view inside another, which is itself inside another, and so on, performance suffers. With views nested too deeply, the optimizer has to process a massive amount of data for every query and can slow to a crawl or return nothing at all. Flattening nested views can bring query response times down from minutes to seconds.
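A sketch of what nesting looks like and how it flattens (view and table names are hypothetical):

```sql
-- Three stacked views: every query against the top one forces
-- the optimizer to expand view upon view.
CREATE VIEW dbo.vOrders AS
    SELECT OrderID, OrderDate, Amount, IsDeleted
    FROM dbo.Orders WHERE IsDeleted = 0;
GO
CREATE VIEW dbo.vRecentOrders AS
    SELECT OrderID, OrderDate, Amount
    FROM dbo.vOrders WHERE OrderDate > '2024-01-01';
GO
CREATE VIEW dbo.vRecentLargeOrders AS
    SELECT OrderID, OrderDate, Amount
    FROM dbo.vRecentOrders WHERE Amount > 1000;
GO

-- Flattened equivalent: one query the optimizer handles directly.
SELECT OrderID, OrderDate, Amount
FROM dbo.Orders
WHERE IsDeleted = 0
  AND OrderDate > '2024-01-01'
  AND Amount > 1000;
```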
6) Prefer table-valued functions over scalar functions
If you are using a scalar function in the select list of a query, you can often boost performance by converting it to a table-valued function and using CROSS APPLY in the query. This can cut the query's run time roughly in half.
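A sketch of the conversion, with a hypothetical per-order total (the function and table names are assumptions):

```sql
-- Inline table-valued replacement for a scalar UDF.
CREATE FUNCTION dbo.fnOrderTotalTVF (@OrderID int)
RETURNS TABLE
AS
RETURN
    SELECT SUM(Quantity * UnitPrice) AS Total
    FROM dbo.OrderDetails
    WHERE OrderID = @OrderID;
GO

-- Before: scalar UDF called once per row in the select list.
SELECT o.OrderID, dbo.fnOrderTotal(o.OrderID) AS Total
FROM dbo.Orders AS o;

-- After: CROSS APPLY lets the optimizer inline the function's
-- logic into the main query plan.
SELECT o.OrderID, t.Total
FROM dbo.Orders AS o
CROSS APPLY dbo.fnOrderTotalTVF(o.OrderID) AS t;
```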
7) Use partitioning in SQL Server
Users of SQL Server Enterprise edition can take advantage of the database engine's partitioning features to speed performance. Even simple tables are created as single partitions in SQL Server, and they can later be split into multiple partitions as required. To move large amounts of data between tables, use the SWITCH command instead of INSERT and DELETE.
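A minimal sketch of the SWITCH command, assuming dbo.Sales is partitioned and dbo.SalesArchive has an identical structure on the same filegroup (both names are hypothetical):

```sql
-- SWITCH moves an entire partition between tables as a
-- metadata-only operation: no rows are physically copied,
-- unlike INSERT ... SELECT followed by DELETE.
ALTER TABLE dbo.Sales
SWITCH PARTITION 1 TO dbo.SalesArchive PARTITION 1;
```

Because no data moves, the switch completes almost instantly regardless of how many rows the partition holds.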
8) Delete and update in batches
Deleting or updating large amounts of data in a huge table is risky. Each statement runs as a single transaction, and if something goes wrong mid-way the system has to kill it and roll back the entire transaction. While that long transaction runs, it also blocks other transactions, which can bring the system to a standstill. Running the work in small batches avoids both problems.
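A common batching pattern, sketched against a hypothetical audit table:

```sql
-- Delete in small batches: each transaction is short, locks are
-- released quickly, and a failure only rolls back one batch.
DECLARE @rows int = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (5000)
    FROM dbo.AuditLog
    WHERE LogDate < DATEADD(year, -1, GETDATE());

    SET @rows = @@ROWCOUNT;  -- 0 when nothing is left to delete
END
```

The batch size of 5000 is only an illustration; tune it for your own tables and workload.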
9) Take your time
Those deletes and updates do not all have to happen on the same day. Spread the work out, running the small batches over a longer period, and you reduce the load on the system even further.
10) Avoid the ORMs
Object-relational mappers cause many of today's performance problems. It is difficult to avoid them entirely, but you can minimize the damage by writing your own stored procedures and having the ORM call them instead of generating its own SQL.
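A sketch of what such a procedure might look like (the names and query are illustrative, not from the original):

```sql
-- Keep the query logic in a stored procedure you control,
-- and map the ORM entity to this procedure rather than
-- letting the ORM generate the SQL.
CREATE PROCEDURE dbo.GetCustomerOrders
    @CustomerID int
AS
BEGIN
    SET NOCOUNT ON;
    SELECT OrderID, OrderDate, Amount
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID;
END
```

Most mainstream ORMs can call stored procedures directly, so you keep the mapper's convenience while controlling the SQL that actually runs.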
Stay connected to CRB Tech for more optimization tips, updates, and information.