
Oracle Certification Now Teaches You DBMS Architecture


What does Two-Tier Architecture mean?

A two-tier architecture is an application architecture in which a presentation layer (the interface) runs on the client, and a data layer (the database) is stored on a server. Separating these two components onto different machines is what makes it a two-tier architecture, as opposed to a single-tier architecture. Other kinds of multi-tier architectures add further layers to the distributed application design.

You can join the Oracle DBA course to learn more about this field.

Two-Tier Architecture

Experts often compare a two-tier architecture to a three-tier architecture, where a third application or business layer is added that serves as a middleman between the client (presentation) layer and the data layer.

This can improve performance and help with scalability. It can also remove many kinds of consistency problems caused by multi-user access in two-tier architectures.

However, the added complexity of a three-tier architecture may also mean higher cost.

An extra note on two-tier architecture: the term “tier” generally refers to splitting the two application layers onto two separate pieces of physical hardware.

Multi-layer applications can run on a single tier, but for practical reasons, most two-tier architectures use a client computer for the first tier and a database server for the second tier.

The design of a DBMS depends on its architecture, which can be centralized, decentralized, or hierarchical.

The architecture of a DBMS can be seen as either single tier or multi-tier.

An n-tier architecture divides the whole system into n related but independent modules, which can be independently modified, altered, changed, or replaced.

In 1-tier architecture, the DBMS is the only entity: the user sits directly on the DBMS and uses it.

Any changes made here are applied directly to the DBMS itself. This arrangement does not provide convenient tools for end-users. Database designers and programmers normally prefer single-tier architecture.

If the architecture of the DBMS is 2-tier, then there must be an application through which the DBMS is accessed. Programmers use 2-tier architecture when they connect to the DBMS by means of an application.

Here the application layer is entirely independent of the database in terms of operation, design, and programming.

3-tier Architecture

A 3-tier architecture separates its tiers from each other based on the complexity of the users and how they use the data present in the database.

It is the preferred architecture to design a DBMS.

Database (Data) Tier − At this tier, the database resides along with its query processing languages. We also have the relations that define the data and their constraints at this level.

Application (Middle) Tier − At this tier reside the application server and the programs that access the database. For a user, this application tier presents an abstracted view of the database. End-users are unaware of any existence of the database beyond the application. At the other end, the database tier is not aware of any user beyond the application tier. Hence, the application tier sits in the middle and acts as a mediator between the end-user and the database.

User (Presentation) Tier − End-users operate at this tier and they know nothing about any existence of the database beyond this layer. At this layer, multiple views of the database can be provided by the application. All views are generated by programs that reside in the application tier.

Multiple-tier database architecture is highly modifiable, as almost all its components are independent and can be changed independently.
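
The contrast between direct (2-tier) access and mediated (3-tier) access can be sketched in a few lines. Below is a minimal illustration in Python, using an in-memory SQLite database to stand in for the data tier; the table and function names are invented for the example:

```python
import sqlite3

# Data tier: an in-memory database standing in for the database server
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT)")
db.execute("INSERT INTO employees (name) VALUES ('Alice'), ('Bob')")

# 2-tier style: the client issues SQL against the database directly
rows_2tier = db.execute("SELECT name FROM employees ORDER BY id").fetchall()

# 3-tier style: an application-tier function mediates every access,
# presenting the client an abstracted view and hiding SQL and schema
def list_employee_names(conn):
    """Application tier: the client never sees tables or SQL."""
    return [name for (name,) in
            conn.execute("SELECT name FROM employees ORDER BY id")]

rows_3tier = list_employee_names(db)
print(rows_2tier)   # [('Alice',), ('Bob',)]
print(rows_3tier)   # ['Alice', 'Bob']
```

Swapping the database engine or renaming columns would only touch the application-tier function, which is the modifiability argument made above.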

Thus you can join the best Oracle training to make your career in this field.


You Will Get DBA Jobs If You Learn What a Storage System Is, Hurry Up!

Databases are stored in file formats, which contain records. At the physical level, the actual data is stored in electromagnetic format on some device. These storage devices can be broadly categorized into three types −

Primary Storage − The storage space that is directly accessible to the CPU comes under this category.

The CPU’s internal storage (registers), fast memory (cache), and main memory (RAM) are directly accessible to the CPU, as they are all placed on the motherboard or CPU chipset.

This storage is typically very small, very fast, and volatile.

To retain data, this storage requires a continuous supply of power.

In case of a power failure, all its information is lost.

Join the DBA certification course to know more about storage.

Secondary Storage − Secondary storage is used to store data and to take backups.

This storage includes devices that are not part of the CPU or motherboard, for example, magnetic disks, optical disks (DVD, CD, etc.), hard disks, flash drives, and magnetic tapes.

Tertiary Storage − Tertiary storage is used to store huge volumes of data.

Since such storage devices are external to the computer system, they are the slowest in speed. These devices are mostly used to back up an entire system. Optical disks and magnetic tapes are widely used as tertiary storage.

Memory Hierarchy

A computer system has a well-defined hierarchy of memory. The CPU has direct access to its main memory as well as its built-in registers.

The access time of main memory is obviously longer than a CPU cycle.

To minimize this speed mismatch, cache memory is introduced. Cache memory provides the fastest access time, and it contains the data most frequently accessed by the CPU.

The memory with the fastest access is the costliest one. Larger storage devices offer slower speeds and are less expensive, but they can store far more data than CPU registers or cache memory.

Magnetic Disks

Hard disk drives are the most common secondary storage devices in the current generation of computer systems.

They are called magnetic disks because they use the concept of magnetization to store information.

Hard disks consist of metal platters coated with magnetizable material. These platters are mounted on a spindle.

A read/write head moves between the platters and is used to magnetize or de-magnetize the spot under it. A magnetized spot can be read as 0 (zero) or 1 (one).

Hard disks are formatted in a well-defined order to store data efficiently. A hard disk platter has many concentric circles on it, called tracks. Every track is further divided into sectors. A sector on a hard disk typically stores 512 bytes of data.
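
The track/sector geometry translates directly into capacity. A back-of-the-envelope sketch (the platter, track, and sector counts below are made-up illustrative numbers, not a real drive's specification):

```python
# Illustrative disk geometry (hypothetical numbers, not a real drive)
platters = 4
surfaces = platters * 2          # both sides of each platter are used
tracks_per_surface = 50_000      # concentric circles per surface
sectors_per_track = 500          # sectors on each track
bytes_per_sector = 512           # the typical sector size noted above

capacity_bytes = (surfaces * tracks_per_surface
                  * sectors_per_track * bytes_per_sector)
print(f"{capacity_bytes / 10**9:.1f} GB")  # 102.4 GB
```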

Redundant Array of Independent Disks

RAID, or Redundant Array of Independent Disks, is a technology that connects several secondary storage devices and uses them as a single storage medium.

RAID consists of an array of disks in which several disks are connected together to achieve different goals.

RAID levels define the use of hard disk arrays.


RAID 0 − In this level, a striped array of disks is implemented. The data is broken down into blocks, and the blocks are distributed among the disks. Each disk receives a block of data to write/read in parallel. This enhances the speed and performance of the storage. There is no parity and no backup in RAID 0.


RAID 1 uses mirroring techniques. When data is sent to a RAID controller, it makes a copy of the data and then forwards it to the array of disks. RAID level 1 is also called mirroring and provides 100% redundancy in case of a failure.


RAID 2 records Error Correction Codes using Hamming distance for its data, striped across different disks. Like level 0, each data bit in a word is recorded on a separate disk, and the ECC codes of the data words are stored on a different set of disks. Due to its complex structure and high cost, RAID 2 is not commercially available.


RAID 3 stripes the data onto multiple disks. The parity bit generated for each data word is stored on a separate disk. This technique makes it possible to recover from single-disk failures.


RAID 4 − In this level, an entire block of data is written onto the data disks, and then the parity is generated and stored on a separate disk.

Note that level 3 uses byte-level striping, whereas level 4 uses block-level striping. Both level 3 and level 4 require at least three disks to implement RAID.


RAID 5 writes whole data blocks onto different disks, but the parity generated for the data block stripes is distributed among all the data disks rather than being stored on a separate dedicated disk.


RAID 6 is an extension of level 5. In this level, two independent parities are generated and stored, distributed across multiple disks. Two parities provide additional fault tolerance. This level requires at least four disks to implement RAID.
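
The parity used by RAID 3 through 6 is bitwise XOR across the data disks: if any single disk fails, XOR-ing the surviving disks with the parity reconstructs the lost one. A minimal sketch with three tiny "disks" of made-up data:

```python
from functools import reduce

# Three data "disks", each holding one stripe of bytes (illustrative data)
disks = [bytes([1, 2, 3, 4]), bytes([5, 6, 7, 8]), bytes([9, 10, 11, 12])]

def xor_bytes(a, b):
    """XOR two equal-length byte strings position by position."""
    return bytes(x ^ y for x, y in zip(a, b))

# Parity stripe = XOR of all data stripes (what RAID 3/4 keep on a parity disk)
parity = reduce(xor_bytes, disks)

# Simulate losing disk 1, then rebuild it from the survivors plus parity
lost = disks[1]
rebuilt = reduce(xor_bytes, [disks[0], disks[2], parity])
print(rebuilt == lost)  # True
```

RAID 5 uses exactly the same XOR, only it rotates which disk holds the parity stripe; RAID 6 adds a second, differently computed parity.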

Thus our DBA institute in Pune is more than enough for you to make your career in this field.


Learn about Concurrency Protocols in Oracle Certification Courses in Pune

In a multiprogramming environment where multiple transactions execute simultaneously, it is vital to control the concurrency of transactions.

We have concurrency control protocols to ensure the atomicity, isolation, and serializability of concurrent transactions.

Concurrency control protocols can be subdivided into two categories −

  1. Lock-based protocols
  2. Timestamp-based protocols

Learn all of this, as there are many database jobs in Pune for freshers.

Lock-based Protocols

Database systems equipped with lock-based protocols use a mechanism by which a transaction cannot read or write data until it acquires an appropriate lock on it.

Locks are of two kinds −

Binary Lock − A data item has a lock that can be in one of two states; it is either locked or unlocked.

Shared/Exclusive − This type of lock differentiates locks based on their use. If a lock is acquired on a data item to perform a write operation, it is an exclusive lock.

Allowing more than one transaction to write to the same data item would lead the database into an inconsistent state.

Read locks are shared because no data value is being modified.
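
This shared/exclusive rule amounts to a small compatibility check: any number of readers may share an item, but a writer excludes everyone. A minimal sketch (the function name and the 'S'/'X' encoding are just for illustration):

```python
def can_grant(requested, held):
    """Return True if `requested` ('S' shared or 'X' exclusive)
    is compatible with the list of lock modes already `held`."""
    if not held:
        return True                           # free item: grant anything
    if requested == "S":
        return all(m == "S" for m in held)    # readers coexist with readers
    return False                              # 'X' is compatible with nothing

print(can_grant("S", ["S", "S"]))  # True: shared locks coexist
print(can_grant("X", ["S"]))       # False: writer must wait for readers
print(can_grant("X", []))          # True: exclusive lock on a free item
```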

Lock-based protocols are of four types −

Simplistic Lock Protocol

Simplistic lock-based protocols allow a transaction to obtain a lock on every item before a ‘write’ operation is performed.

After the completion of the ‘write’ operation, the transaction may unlock the data item.

Pre-claiming Lock Protocol

Pre-claiming protocols evaluate their operations and create a list of data items on which they need locks.

Before starting execution, the transaction requests all the locks it needs in advance. If all the locks are granted, the transaction executes and releases all the locks when all its operations are over.

If all the locks are not granted, the transaction rolls back and waits until all the locks are granted.

Two-Phase Locking (2PL)

This locking protocol divides the execution phase of a transaction into three parts.

In the first part, when the transaction starts executing, it seeks permission for the locks it requires.

The second part is where the transaction acquires all the locks.

As soon as the transaction releases its first lock, the third part starts. In this part, the transaction cannot demand any new locks; it only releases the acquired locks.

Two-phase locking has two phases: a growing phase, where all the locks are being acquired by the transaction; and a shrinking phase, where the locks held by the transaction are being released.

To claim an exclusive (write) lock, a transaction must first acquire a shared (read) lock and then upgrade it to an exclusive lock.

Strict Two-Phase Locking

The first phase of Strict-2PL is the same as 2PL. After acquiring all the locks in the first phase, the transaction continues to execute normally.

But in contrast to 2PL, Strict-2PL does not release a lock right after using it. Strict-2PL holds all the locks until the commit point and releases all the locks at one time.
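
The growing/shrinking discipline can be enforced mechanically: once a transaction has released any lock, further acquisitions must be refused. A minimal single-threaded sketch of the 2PL rule (class and names invented for illustration; Strict-2PL would simply defer all unlocks to the commit point):

```python
class TwoPhaseTxn:
    """Enforces two-phase locking: no lock may be acquired
    after the first lock has been released (the shrinking phase)."""
    def __init__(self):
        self.held = set()
        self.shrinking = False

    def lock(self, item):
        if self.shrinking:
            raise RuntimeError("2PL violation: lock after release")
        self.held.add(item)

    def unlock(self, item):
        self.shrinking = True      # entering the shrinking phase
        self.held.discard(item)

t = TwoPhaseTxn()
t.lock("X")
t.lock("Y")        # growing phase: fine
t.unlock("X")      # shrinking phase begins
try:
    t.lock("Z")    # violates 2PL
except RuntimeError as e:
    print(e)       # 2PL violation: lock after release
```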

Timestamp-based Protocols

The most commonly used concurrency protocol is the timestamp-based protocol.

This protocol uses either the system time or a logical counter as a timestamp.

Lock-based protocols manage the order between conflicting pairs of transactions at execution time, whereas timestamp-based protocols fix the order as soon as a transaction is created.

Every transaction has a timestamp associated with it, and the ordering is determined by the age of the transaction. A transaction created at clock time 0002 would be older than all other transactions that come after it.

For example, any transaction ‘y’ entering the system at 0004 is two seconds younger, and priority would be given to the older one.

In addition, every data item is given the latest read-timestamp and write-timestamp.

This lets the system know when the last ‘read’ and ‘write’ operations were performed on the data item.

Timestamp Ordering Protocol

The timestamp-ordering protocol ensures serializability among transactions in their conflicting read and write operations.

It is the responsibility of the protocol that conflicting pairs of operations are executed according to the timestamp values of the transactions.

The timestamp of transaction Ti is denoted as TS(Ti).

The read time-stamp of data-item X is denoted by R-timestamp(X).

The write time-stamp of data-item X is denoted by W-timestamp(X).

The timestamp-ordering protocol works as follows −

If a transaction Ti issues a read(X) operation −

If TS(Ti) < W-timestamp(X)

Operation rejected.

If TS(Ti) >= W-timestamp(X)

Operation executed.

All data-item timestamps updated.

If a transaction Ti issues a write(X) operation −

If TS(Ti) < R-timestamp(X)

Operation rejected.

If TS(Ti) < W-timestamp(X)

Operation rejected and Ti rolled back.

Otherwise, operation executed.

Thomas’ Write Rule

This rule states that if TS(Ti) < W-timestamp(X), then the operation is rejected and Ti is rolled back.

Timestamp-ordering rules can be modified to make the schedule view-serializable.

Instead of rolling Ti back, the ‘write’ operation itself is ignored.
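
The read and write rules above, with Thomas' write rule as an option, fit in a few lines. A minimal sketch using logical-counter timestamps (all names are illustrative):

```python
class DataItem:
    def __init__(self):
        self.r_ts = 0   # R-timestamp(X): youngest reader so far
        self.w_ts = 0   # W-timestamp(X): youngest writer so far

def read(ts, x):
    """Timestamp-ordering read rule for a transaction with timestamp ts."""
    if ts < x.w_ts:
        return "rejected"          # Ti would read an already-overwritten value
    x.r_ts = max(x.r_ts, ts)
    return "executed"

def write(ts, x, thomas=False):
    """Write rule; with thomas=True, obsolete writes are ignored instead."""
    if ts < x.r_ts:
        return "rejected"          # a younger transaction already read X
    if ts < x.w_ts:
        return "ignored" if thomas else "rejected, rolled back"
    x.w_ts = ts
    return "executed"

x = DataItem()
print(write(5, x))               # executed (W-timestamp(X) becomes 5)
print(read(3, x))                # rejected (TS(Ti)=3 < W-timestamp(X)=5)
print(write(4, x, thomas=True))  # ignored  (Thomas' write rule)
```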

You can join the SQL DBA training in Pune to make your career in this field.


Best Oracle Certification Course In Pune With Placement

Introduction to Oracle Certification

There are many industries that use the Oracle database, so choosing the profile of an Oracle DBA service provider would be an extremely lucrative job now and in the future. CRB Tech provides real-time and targeted Oracle DBA/OCA certification training. CRB Tech has designed the Oracle DBA/OCA certification course content and curriculum based on learners' needs, so they can work as database administrators after joining the database administrator course in Pune.

1. What is the future of a DBA course?

If we look at the job prospects in the field of databases, especially for freshers, there is really great demand. If your aim is to become a DBA professional, then you need an Oracle certification and the mindset of a database administrator, to maintain data in a professional way.

2. How to become a skillful DBA professional?

Don't worry; this is where CRB Tech comes into the picture, providing you the best Oracle training courses in Pune, no doubt about it. We offer 100% placement-oriented training and will ensure that you become an Oracle certified professional.

So you are welcome to get registered for the database administrator course, the best DBA course offered in Pune.

3. What are the eligibility criteria for the DBA training institute in Pune?

Anybody who is really interested in attending the course at the SQL training institutes in Pune is eligible to do so. The prerequisite educational criterion for this course is a degree in computers.

Other candidate preferences:

  1. Excellent communication skills

  2. Aspiration to become a DBA professional

  3. Lateral entry after experience

4. Key points of the DBA course

Assured Job offer:

Jobs are provided with 100% assurance; furthermore, you will have an amazing DBA career thanks to our quality-centered extensive training developed just for you.

German language extra benefit:

Training you in a foreign language will make your career special.

Candidate apt infrastructure:

We have impressive lab facilities and DBA classroom training that is comfortable and convenient for all candidates willing to make their career through us.

Tie-ups with mid-level companies and MNCs:

An ocean of opportunities is provided for you, and we will also train you to be capable of taking up DBA jobs.

Curriculum created by senior-level trainers:

Many professional experts and industry-specialized teachers have put their heads together to design this curriculum for your future knowledge.

Campus drives solely for you:

Candidates are provided with varied kinds of opportunities from mid level companies to MNCs through our DBA training institute in Pune.

Refining your enterprise presentation skills:

Your business presentation capabilities can be made more eye-catching through the training we give you in the sessions and classes you may have to conduct later on.

5. Criteria for placement through CRB Tech

Proper outfit

Communication in English

Non-freshers can get lifetime guarantee

Earn and Learn

Compulsory attendance

6. Certification

You can be an Oracle Certified Professional after the completion of our DBA course in Pune.

7. Placement:

Our previous candidates have been placed in IBM, Max Secure, Mind Gate, and Saturn Infotech; counting the learners placed, the number is 23. We likewise provide an LOI (Letter of Intent) within 15 days of training, which is simply the document recording the agreement between the two parties.

8. Syllabus

    1. Introduction :

  • List the features of Oracle 10g

  • Discuss the theoretical and physical aspects of a relational database

  • Describe the Oracle implementation of the RDBMS and ORDBMS

  • Understand the goals of the course

  • Identify the major structural components of the Oracle Database 10g

  • Retrieve row and column data from the table with the SELECT statement

  • Create reports of sorted and restricted data

  • Employ SQL functions to generate and retrieve customized data

  • Run Data Manipulation Language (DML) statements

  • Obtain metadata by querying the dictionary views

  • Group Discussion

2. Retrieving Data Using the SQL SELECT Statement :

  • Capabilities of SQL SELECT statements

  • Execute a basic SELECT statement

  • Arithmetic Expressions, Operator Precedence

  • Defining a Null Value, Null Values in Arithmetic Expressions

  • Defining a Column Alias

  • Concatenation operator, Literal Character Strings

  • Alternate Quote(q) Operator, Duplicate rows, distinct

  • SQL and iSQL*Plus interaction, Logging into iSQL*Plus, Displaying table structure

  • Interacting with script files

  • iSQL*Plus History Page

  • Group Discussion

3. Restricting and Sorting Data :

  • Limiting Rows using a selection

  • Where clause with character strings and Dates, Comparison Conditions

  • BETWEEN, IN, LIKE (%, _) conditions

  • Logical Conditions, Not Operator, Rules of Precedence

  • ORDER BY Clause, Sorting asc, desc

  • Substitution Variables

  • Define, verify

  • Group Discussion

Unit test 1

4. Using Single row functions to customize output :

  • Types of SQL function

  • Single Row functions

  • Character Functions

  • Using Case-Manipulation functions

  • Character Manipulation functions

  • Using the character manipulation functions

  • number function

  • Group discussion

  • ROUND, TRUNC, MOD, SYSDATE functions

  • Working with dates, RR Date Format

  • Arithmetic with dates

  • Date Manipulation Function

  • Conversion Function

  • Nesting Function

  • General Functions(NVL, NVL2, NULLIF, Coalesce, Case Expression, Decode Function)

  • Group Discussion

5 . Reporting Aggregated  Data Using Group functions :

  • Group Functions

  • min,max, count, avg, sum

  • group by clause

  • having clause

  • nesting group functions

  • Group Discussion

6. Display Data Using Multiple Tables : 

  • Types of joins

  • Cross join

  • Natural join

  • Using Clause

  • Full(two sided)outer join

  • Group Discussion

Unit test 2

7. Using Subqueries to Solve Queries : 

  • Arbitrary join conditions for outer joins

  • Single row subquery

  • Multirow subquery(IN, ANY,ALL)

  • Null Values in a subquery

  • Group Discussion

8 . Using the Set operators : 

  • Set Operators

  • Union, Union All, Minus, Intersect

  • Group Discussion

9. Manipulating data :

  • DML(insert, update, delete)

  • DDL(Truncate)

  • TCL (Commit, Rollback, Savepoint)

  • Group Discussion

10. Using DDL Statements to Create and Manage table : 

  • Database objects

  • Create Table

  • Referencing Another User’s table

  • Default option, data types, including constraints

  • Constraint Guideline

  • NOT NULL constraint

  • Unique Constraint

  • Primary Key, Foreign Key

  • Check Constraint

  • Violating Constraint

  • Create table using subquery

  • Alter table, drop table

  • Group Discussion

Unit test 3

11. Create other schema objects :

  • View(simple, complex view)

  • Rules of view with example

  • Using with check option

  • Denying DML operations

  • Drop View

  • Sequence

  • nextval, currval, modifying sequence

  • drop sequence


  • Create index, Index Guideline, drop index

  • Synonyms

  • Create and remove synonyms

12. Managing objects with Data Dictionary views :

  • The Data Dictionary

  • Data Dictionary Structure

  • How to use the Dictionary views

  • User_objects and all_objects

  • Table,column, constraint, view, sequence, synonyms information

  • Adding comments to a table

  • Group Discussion

13. Controlling User Access :

  • Privileges(system level, object level)

  • Create user, grant, revoke privileges

  • assign tablespace to user, create Role

  • Group Discussion

14. Managing Schema Objects :

  • Alter Table, modify column, Drop Column,

  • rename table name, column name

  • Drop table, set unused, adding/dropping/deleting constraints

  • Enabling/disabling constraint

  • Create index with the create table

  • Function based index

  • Drop index

  • Drop table, purge table, recycle bin

  • Group Discussion

Unit test 4

15. Manipulating large data sets :

  • Using Subquery to manipulate data

  • Copying rows from another table

  • Updating columns with subquery

  • Updating rows Based on another table

  • Deleting rows based on another table

  • With check option on DML statements

  • Types of multiple insert

  • Multiple insert

  • Unconditional insert all

  • Conditional insert all

  • Conditional insert first

  • Pivoting Insert

  • Merge Statement

  • Tracking Changes in Data

  • Flashback version query

  • Version between clause

  • Group Discussion

16 . Generating Reports by grouping related data : 

  • Rollup, Cube, Grouping Function, Grouping Set

17. Managing data in Different TimeZones : 

  • TimeZone, TimeZone session parameter

  • current_date, current_timestamp, localtimestamp, dbtimestamp, sessiontimezone, timestamp datatype

  • Difference between DATE and TIMESTAMP

  • Timestamp with time zone data type

  • Timestamp with local time zone

  • Interval datatype

  • Group Discussion

Unit test 5

18. Retrieving Data Using Subqueries :

  • Multiple Column Subqueries

  • Column Comparison

  • Pairwise and Nonpairwise subqueries

  • Scalar subquery

  • Correlated Subqueries

  • Exists Operator

  • Correlated Update/Delete

  • Group Discussion

19. Hierarchical Retrieval : 

  • Sample data from employees table

  • Natural tree structure

  • Hierarchical Queries

  • Walking the tree

  • Walking the tree from the Bottom up

  • Walking the tree from the TopDown

  • Ranking rows with the level Pseudocolumn

  • Formatting Hierarchical Reports Using LEVEL and LPAD

  • Pruning Branches

  • Group Discussion

20. Regular Expression Support :

  • Regular Expression: Overview

  • Meta Characters

  • Regular Expression Functions

  • REGEXP Function Syntax

  • Performing Basic Searches

  • Checking the presence of a pattern

  • Example of extracting substrings

  • Replacing patterns

  • Regular Expressions Check Constraints

  • Group Discussion

UNIT Test 6


Just join in and we will make you the best.


SQL DBA Training in Pune Will Make You An Expert in Deadlock Topic


In a multi-process system, deadlock is an unwanted situation that arises in a shared environment, where a process indefinitely waits for a resource that is held by another process.

For example, assume a set of transactions {T0, T1, T2, …, Tn}. T0 needs resource X to complete its task.

Resource X is held by T1, and T1 is waiting for resource Y, which is held by T2.

T2 is waiting for resource Z, which is held by T0.

Thus, all the processes wait for each other to release resources.

In this situation, none of the processes can complete their task.

This situation is known as a deadlock.

Deadlocks are not healthy for a system. If a system is stuck in a deadlock, the transactions involved in the deadlock are either rolled back or restarted.

Just get to know more and become an expert on the Oracle certification path, as there are many Oracle DBA jobs in Pune for freshers.

Deadlock Prevention

To prevent any deadlock situation in the system, the DBMS aggressively inspects all the operations that transactions are about to execute. The DBMS inspects the operations and analyzes whether they can create a deadlock situation.

If it finds that a deadlock situation might occur, then that transaction is never allowed to be executed.

There are deadlock prevention schemes that use the timestamp ordering mechanism of transactions in order to predetermine a deadlock situation.

Wait-Die Scheme

In this scheme, if a transaction requests to lock a resource (data item) that is already held with a conflicting lock by another transaction, then one of two possibilities may occur −

If TS(Ti) < TS(Tj) − that is, Ti, which is requesting a conflicting lock, is older than Tj − then Ti is allowed to wait until the data item is available.

If TS(Ti) > TS(Tj) − that is, Ti is younger than Tj − then Ti dies. Ti is restarted later with a random delay but with the same timestamp.

This scheme allows the older transaction to wait but kills the younger one.

Wound-Wait Scheme

In this scheme, if a transaction requests to lock a resource (data item) that is already held with a conflicting lock by some other transaction, one of two possibilities may occur −

If TS(Ti) < TS(Tj), then Ti forces Tj to be rolled back − that is, Ti wounds Tj. Tj is restarted later with a random delay but with the same timestamp.

If TS(Ti) > TS(Tj), then Ti is forced to wait until the resource is available.

This scheme allows the younger transaction to wait; but when an older transaction requests an item held by a younger one, the older transaction forces the younger one to abort and release the item.

In both cases, the transaction that enters the system at a later stage is aborted.
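
Both schemes reduce to comparing timestamps, where a smaller timestamp means an older transaction. A minimal sketch of the two decisions (function names invented for illustration):

```python
def wait_die(ts_requester, ts_holder):
    """Wait-Die: an older requester waits; a younger requester dies."""
    return "wait" if ts_requester < ts_holder else "die"

def wound_wait(ts_requester, ts_holder):
    """Wound-Wait: an older requester wounds (aborts) the holder;
    a younger requester waits."""
    return "wound holder" if ts_requester < ts_holder else "wait"

# T5 (older, ts=5) vs T9 (younger, ts=9) contending for the same item:
print(wait_die(5, 9))     # wait         (older waits for younger)
print(wait_die(9, 5))     # die          (younger is rolled back)
print(wound_wait(5, 9))   # wound holder (older preempts younger)
print(wound_wait(9, 5))   # wait         (younger waits for older)
```

In either scheme it is always the younger transaction that ends up aborted, which is what guarantees freedom from deadlock.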

Deadlock Avoidance

Aborting a transaction is not always a practical approach.

Instead, deadlock avoidance mechanisms can be used to detect any deadlock situation in advance. Methods like the “wait-for graph” are available, but they are suitable only for systems where transactions are lightweight and hold few instances of each resource. In a bulky system, deadlock prevention techniques may work better.

Oracle training is in demand − don't miss the chance to become a master of it.

Wait-for Graph

This is a simple method available to track whether any deadlock situation may arise. For each transaction entering the system, a node is created. When a transaction Ti requests a lock on an item, say X, which is held by some other transaction Tj, a directed edge is created from Ti to Tj.

If Tj releases item X, the edge between them is dropped and Ti locks the data item.

The system maintains this wait-for graph for every transaction waiting for some data items held by others. The system keeps checking whether there is any cycle in the graph.

Here, we can use either of the two following approaches −

  1. First, do not allow any request for an item that is already locked by another transaction. This is not always feasible and may cause starvation, where a transaction indefinitely waits for a data item and can never acquire it.
  2. The second option is to roll back one of the transactions. It is not always feasible to roll back the younger transaction, as it may be more important than the older one. With the help of some relative criteria, a transaction is chosen to be aborted. This transaction is known as the victim, and the process is known as victim selection.
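
Detecting a deadlock on a wait-for graph is ordinary cycle detection on a directed graph. A minimal depth-first-search sketch; the first edge list below encodes the circular wait T0 → T1 → T2 → T0 from the example earlier:

```python
def has_cycle(waits_for):
    """Detect a cycle in a wait-for graph given as
    {txn: set of txns it is waiting for}."""
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on current path / done
    color = {t: WHITE for t in waits_for}

    def dfs(t):
        color[t] = GRAY
        for u in waits_for.get(t, ()):
            if color.get(u, WHITE) == GRAY:   # back edge: cycle (deadlock)
                return True
            if color.get(u, WHITE) == WHITE and dfs(u):
                return True
        color[t] = BLACK
        return False

    return any(color[t] == WHITE and dfs(t) for t in waits_for)

# T0 waits for T1, T1 for T2, T2 for T0: the circular wait described above
print(has_cycle({"T0": {"T1"}, "T1": {"T2"}, "T2": {"T0"}}))  # True
print(has_cycle({"T0": {"T1"}, "T1": {"T2"}, "T2": set()}))   # False
```

When a cycle is found, the system applies victim selection to one of the transactions on the cycle.
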

Make Your Oracle Careers in Testing using SQL



This article covers some general test-requirement issues for SQL Server back-end testing and presents a test methodology, including test design.

Forecast LRS, Delta, KENAI, KBATS, and so on are systems designed by ITG that have client-server architectures. Projects whose back ends are completely tested are only few in number.

1.1 Importance of back-end testing

The back end is the engine of any client/server system. Problems in the back end may lead to data loss, system deadlock, corrupted data, and bad performance. Various front-end applications log on to a single SQL Server. A small bug in the back end may cause the whole system to collapse, and the more bugs in the back end, the more it will cost you.

It is clear that the various tests done on the front end do not exercise much of the back end, so the back end needs direct testing.

Benefits of back-end testing:

To testers, the back end is not a black box; they have in-depth control of test coverage and detail. Many bugs can be effectively found and corrected in the early stages of development.

Taking Forecast LRS as an example, the number of bugs in the back end was more than 30% of the total count of bugs in the project.

When back-end bugs are fixed, system quality improves considerably.

1.2 Differences between back-end testing and front-end testing

It is not easier to know and check a back-end than a front end end because a front side end this is because of user friendly interfaces.

Tables, saved procedures and triggers are the objects that back end has. Data reliability and protection is important.

There are also big concerns such as multiuser access and performance: an operation that runs slowly can jeopardize the project's future, so it is vital to get these right.

There are few testing tools for the back end; SQL itself is the most widely used. MS Access and MS Excel can be used to verify data, but they are not well suited to testing.

By contrast, there is a wide range of tools for front-end testing.

For back-end testing, the tester must be an expert in SQL. So please join the Oracle course in Pune and become an Oracle Certified Professional.

The tester must understand both SQL Server and SQL testing, and as a result there are not many such testers available.

1.3 Back end testing phases

Let us look at the various stages of back end testing.

  1. Gather the requirements for the SQL database design.
  2. Analyze the requirements of the design.
  3. Implement tests against this design using SQL queries.
  4. Test progressively: component testing (individual components of the system), regression testing (previously known bugs), integration testing (several pieces of the program put together), and then the entire system (including both the front end and the back end).

Component testing is done at an early stage of the development cycle. After component testing, integration and system testing begin.

Throughout the project, regression testing will be done.

In the final stages there is no independent back-end testing, since the back end is exercised through the front end.

Final stage is quality product delivery.

1.4 Back end testing methodology

Back-end testing has much in common with front-end testing and API testing.

Many testing techniques can be used for back-end testing.

Functional testing and structural testing are the most effective techniques for back-end testing.

Some test cases combine both. Combined testing tends to find more bugs, so testers are recommended to do both kinds.

And to be a tester you should join a dba institute now.

Back end testing has various options of testing and here is a list of them:

Functional testing:

A back end can be split into a limited number of testable pieces based on the application's functionality.

Functionality and input/output are the test focus, not implementation and structure. Different projects may be broken down in different ways.

Boundary testing:

Many columns have boundary conditions. Consider a percentage column whose valid range is 0 to 100: boundary analysis tests values at and just beyond those limits.
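As a sketch of such a boundary test, assuming a hypothetical `students` table with a `percentage` column constrained to 0-100, a query can hunt for violations and inserts can probe the limits:

```sql
-- Hypothetical table: students(id, name, percentage), valid range 0..100.
-- Any row returned by this query is a bug.
SELECT id, percentage
FROM   students
WHERE  percentage < 0 OR percentage > 100;

-- Boundary analysis also tries values at and just beyond the limits:
INSERT INTO students (id, name, percentage) VALUES (9001, 'edge-low',  0);    -- expect success
INSERT INTO students (id, name, percentage) VALUES (9002, 'edge-high', 100);  -- expect success
INSERT INTO students (id, name, percentage) VALUES (9003, 'too-low',  -1);    -- expect rejection
INSERT INTO students (id, name, percentage) VALUES (9004, 'too-high', 101);   -- expect rejection
```

The expected rejections only occur if a CHECK constraint or trigger enforces the range; if they succeed, the boundary condition is unguarded.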

Stress testing:

As the name suggests, stress testing submits heavy loads of data. For instance, when many users access large volumes of data in the same table, repeated stress tests are required.


Some test areas cover the major test requirements, but not all databases are the same.

There are three different categories based on the structure of a SQL database:

Database Schema

Stored Procedures

Triggers


The schema comprises the database design: tables, table columns, column types, keys, indexes, and defaults. Stored procedures are built on top of the SQL database.

The front end communicates with an API in a DLL, and the DLL in turn communicates with the SQL database through stored procedures. Triggers are also a kind of stored procedure, fired automatically by table events.

Following are the structural back-end tests:

2.1 Database schema testing

2.2 Stored procedure tests

2.3 Trigger tests

2.4 Integration tests of SQL server

2.5 Server setup scripts


As said earlier, functionality and features are the prime focus of this testing. Each project has different test cases.

Still, many things are common across projects. The following describes the most common ones; project-specific test cases should be added to the functional test design.

It is not wise to test a server database as a single entity at the initial stage.

We have to split it into functional groups.

If we cannot do the partition, either we do not know the project deeply enough or the design is not well modularized.

How to split a server database depends essentially on the project's features.


Start from the project features.

For each major feature, pick out the portion of the schema, the triggers, and the stored procedures that implement the feature, and make them into a functional group.

Each group can then be tested together. For example, the Forecast LRS project had four services: forecast, product lite, reporting, and system. This was the key to its functional partitioning.


If the boundaries of the functional groups in a back end are not apparent, we can watch the data flow and see where we can examine the data:

Begin from the front end.

When a service issues a request or saves data, certain stored procedures get called and table updates take place. Those stored procedures are the starting point for testing, and those tables are the best place to evaluate and analyze the results.
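That flow can be turned into a direct test: call the stored procedure with known inputs, then query the tables it touches. The procedure, parameters, and table below are hypothetical placeholders:

```sql
-- Hypothetical: the front end calls usp_SaveOrder, which writes to Orders.
-- Call the procedure directly with known inputs...
EXEC usp_SaveOrder @CustomerId = 42, @Amount = 99.50;

-- ...then evaluate the result in the table it updates.
SELECT OrderId, CustomerId, Amount
FROM   Orders
WHERE  CustomerId = 42
ORDER BY OrderId DESC;   -- the newly inserted row should appear first
```

The same pattern extends to triggers: perform the table update that fires the trigger, then verify the trigger's side effects in the affected tables.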

Following are the functional back-end tests:

3.1 Test functions and features

3.2 Checking data integrity and consistency


3.3 Login and user security

3.4 Stress Testing

3.5 Test a back end via a front end

3.6 Benchmark testing

3.7 Common bugs

For more information join the best oracle training.




2 Options for Query Optimization with SQL

Working with SQL Server performance is always a challenge. When developers try to fix SQL Server performance issues, the first thing they do is look at the queries. This is the most common and most essential step for most developers. Developers love these optimization challenges because they can get the biggest visible performance improvements in their environments. These efforts also give them the highest visibility, even within their organizations, when troubleshooting customer issues. In this short article, let me take a cut at two query optimization ideas available in SQL Server. These are techniques hidden within SQL Server that are worth knowing.


SQL Server 2005 introduced the OPTIMIZE FOR hint, which allows a DBA to specify a literal value to be used for cardinality estimation and optimization. If we have a table with a skewed data distribution, OPTIMIZE FOR can be used to optimize for a typical value that gives reasonable performance across a range of parameter values. While the performance may not be the best for every parameter, it is sometimes better to have a consistent execution time than a plan that does a seek in one case (for a selective parameter value) and a scan in another (where the parameter value is very common), depending on the value passed at initial compilation.

Unfortunately, OPTIMIZE FOR only accepts literals. If the variable is something like a datetime or an order number (which by nature tends to increase over time), any fixed value you specify will soon become out of date, and you must edit the hint to specify a new value. Even if the parameter's domain stays relatively fixed over time, having to supply a literal means you must research and find a good "general purpose" value to put in the hint. Sometimes this is difficult to get right.

Ultimately, providing an OPTIMIZE FOR value affects plan choice by changing the cardinality estimates for the predicate that uses that parameter. If you supply a value that does not exist or is infrequent in the histogram, you lower the estimated cardinality; if you supply a common value, you raise it. This affects cost and ultimately plan choice.

If all you want is an "average" value and you don't care what the value is, the OPTIMIZE FOR (@variable_name UNKNOWN) hint causes the optimizer to ignore the parameter value for the purpose of cardinality estimation. Instead of using the histogram, the cardinality estimate is derived from density, key information, or fixed selectivity estimates, depending on the predicate. This yields a predictable estimate that does not require the DBA to continually monitor and adjust the value to maintain consistent performance.

A variation of the syntax tells the optimizer to ignore all parameter values: simply specify OPTIMIZE FOR UNKNOWN and skip the parentheses and variable name(s). Specifying OPTIMIZE FOR UNKNOWN causes the ParameterCompiledValue to be omitted from the showplan XML output, just as if parameter sniffing had not occurred. The resulting plan will be the same regardless of the parameters passed, and can give more predictable query performance.
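As a sketch, the three forms of the hint look like this against a hypothetical Orders table and @custId parameter (inside a stored procedure or parameterized batch):

```sql
-- 1. Optimize for a specific literal value:
SELECT OrderId, Amount FROM Orders WHERE CustomerId = @custId
OPTION (OPTIMIZE FOR (@custId = 1000));

-- 2. Ignore this parameter's sniffed value; use density/average selectivity:
SELECT OrderId, Amount FROM Orders WHERE CustomerId = @custId
OPTION (OPTIMIZE FOR (@custId UNKNOWN));

-- 3. Ignore all parameter values in the statement:
SELECT OrderId, Amount FROM Orders WHERE CustomerId = @custId
OPTION (OPTIMIZE FOR UNKNOWN);
```

Form 1 pins the plan to the supplied literal's cardinality estimate; forms 2 and 3 trade the best-case plan for a predictable one.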


There are some circumstances where the community may suggest using a trace flag as a workaround for a query plan or optimizer issue, or may find that disabling a particular optimizer rule avoids a particular problem. Some trace flags are general enough that it is hard to judge whether turning the flag on is a good general remedy for all queries, or whether the issue is specific to the one query that was examined. Likewise, most of these optimizer rules are not inherently bad, and disabling one for the system as a whole is likely to cause a performance regression somewhere else.


As we wrap up this post, it is essential to know when to use these query optimization and query tuning techniques in your environment. Please evaluate them on a case-by-case basis and do enough testing before using them. I am sure the learning will never stop, as the next editions of SQL Server will be full of plenty of extra features. Upcoming posts will discuss many of these additions. You can join the dba certification course in Pune for getting the best oracle training.


3 Things To know about SQL Server TempDB and Performance

SQL Server has four system databases by default, and one of them is known as TempDB. TempDB is used for many purposes, such as user-created temporary objects, internal temporary objects, version stores, and certain features like online re-indexing, multiple active result sets (MARS), and others. Since TempDB is shared across all databases and all connections in SQL Server, it can become a point of contention if not configured correctly. This article covers a few important performance-related facts about TempDB.

First, let's review a few fundamentals. The name of the database describes its purpose, but keep in mind that this database is recreated whenever the SQL Server service starts. Some DBAs use the TempDB database creation date as the "start time" for SQL Server. Here is the query:
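The query itself did not survive in the post; it was presumably the standard lookup against sys.databases:

```sql
-- TempDB is rebuilt at every service start, so its creation date
-- doubles as the instance start time.
SELECT create_date AS sql_server_start_time
FROM   sys.databases
WHERE  name = 'tempdb';
```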

Now that we have covered the fundamentals, let us advance with three things you should look out for.

Tip 1: Keep TempDB on a local drive in a cluster

This feature was introduced in SQL Server 2012. Generally, in a clustered instance of SQL Server, database files are stored on shared storage (SAN). In SQL Server 2012 and later, however, we can keep TempDB on locally attached drives. As said before, the TempDB database is shared across the whole instance, and hence the IO performance of this database is very critical.

With faster drives like SSDs and FusionIO cards, there has been growing interest in keeping TempDB on such drives in clustered setups as well. Microsoft heard this feedback and allowed SQL Server to keep TempDB files on a local drive in a cluster. One benefit of putting TempDB on a local disk is that it separates the IO traffic: other database files stay on the SAN while TempDB files live on the local disk. With a PCIe SSD or traditional SATA SSDs, the IO operations performed on TempDB bypass the HBAs. This gives better performance for TempDB operations and avoids contention on the shared storage network or array.

Another benefit of this feature is cost savings. Suppose we set up a multisite, geographically distributed cluster. The SAN would be replicated from one location to another, perhaps a few kilometers or many kilometers apart. If TempDB is kept on the SAN, it is replicated too, even though, as described earlier, it is a scratchpad database for SQL Server. Keeping its files on local drives means better bandwidth usage and faster failovers.

We just need to make sure the same local path exists on all nodes of the SQL Server cluster.

Tip 2: Configure multiple DATA files

When a database has multiple data files, all writes to the database are striped across the files in proportion to the free space each file has relative to the total free space across all files. Each data file has its own set of allocation pages (called PFS, GAM, and SGAM pages), so as the writes move from file to file, page allocations are served by different allocation bitmap pages, spreading the work across the files and reducing contention on any single page.

The general recommendation is that the number of data files should equal the number of logical processors if there are fewer than 8; otherwise configure 8 data files. For example, with a dual-core processor, set the number of TempDB data files to two. With more than 8 cores, start with 8 files and add four at a time as needed. We also need to ensure that the initial size and auto-growth settings are configured identically for ALL TempDB data files.
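Adding a TempDB data file sized and grown like the others might look like this (file name, path, and sizes are illustrative, not prescriptive):

```sql
-- Add a second TempDB data file with the same size and growth settings
-- as the first; repeat for files 3..8 on instances with 8+ logical CPUs.
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2,
          FILENAME = 'T:\TempDB\tempdev2.ndf',
          SIZE = 1024MB,
          FILEGROWTH = 256MB);
```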

Tip 3: Consider trace flags 1117 and 1118

These are two trace flags that are useful for avoiding contention in the TempDB database. The better known is 1118, which prevents contention on SGAM pages by slightly changing the allocation algorithm used. When trace flag 1118 is enabled, allocations in TempDB change from a single page at a time out of a mixed extent to allocating a full extent of 8 pages. So when many temporary tables are being created in TempDB, allocation bitmap contention is reduced.
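Either flag can be enabled globally at runtime with DBCC TRACEON; a sketch (the setting does not survive a restart unless added as a startup parameter):

```sql
-- Enable trace flags 1118 and 1117 globally for the running instance.
DBCC TRACEON (1118, -1);
DBCC TRACEON (1117, -1);

-- Verify their status.
DBCC TRACESTATUS (1118, 1117);

-- For a permanent setting, add -T1118 and -T1117 to the SQL Server
-- startup parameters instead.
```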

Less well known, trace flag 1117 changes the auto-grow algorithm in SQL Server. It is always recommended to grow the data files manually, because when SQL Server performs auto-grow, it grows one data file at a time in round-robin fashion: it auto-grows the first file, writes to it until it is full, and then auto-grows the next file. Notice that proportional fill is broken at that point. When trace flag 1117 is enabled, SQL Server auto-grows all of the data files simultaneously. You can join the sql training in Pune for an oracle course to make your career in this field.


What is impressive about the back up of SQL 2016?

Every release of SQL Server delivers many new capabilities, and opportunities for something new to learn. SQL Server 2016 is no different in this regard. Along with many other additional features in SQL Server 2016, Microsoft has invested in improvements to backup. After exploring these improvements further, I believe these features are very much a cloud enabler.

Managed Backup – Enhancement

Managed backup has been around since SQL Server 2014 and allows an automated database backup to Microsoft Azure, based on changes made in the database. The feature schedules, performs, and retains the backups; all a DBA needs to do is specify a retention period. In SQL Server 2014 there was little control over frequency. In SQL Server 2016, this has been enhanced:

  1. System databases can be backed up.

  2. Backup of databases in the simple recovery model is possible.

  3. A backup schedule can be customized based on business need rather than log usage.

Many objects have been added to the MSDB database in SQL Server 2016 to manage these backups. They live under a new schema known as managed_backup. We can run the query below to find all objects under the new schema.
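The query itself was not preserved in the post; a lookup along these lines lists everything under the new schema:

```sql
USE msdb;
-- List all objects that belong to the managed_backup schema.
SELECT o.name, o.type_desc
FROM   sys.objects o
JOIN   sys.schemas s ON s.schema_id = o.schema_id
WHERE  s.name = 'managed_backup';
```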

All the execution of managed backup is driven by SQL Server Agent, so it is essential to make sure the Agent service is set to start automatically.

Backup to Azure – Enhancement

As of this writing, Microsoft Azure provides four types of storage:

  1. Block blobs
  2. Page blobs
  3. Disks, Tables and Queues
  4. Files

SQL Server 2014 let you take backups to page blobs. If we look at the monthly storage price, block blobs cost less than page blobs. In SQL Server 2016, you can now take backups to block blobs.

It is worth noting that a page blob has a limit of 1 TB, while a block blob is limited to 200 GB. Does this mean we can't take a backup larger than 200 GB? No: we are allowed to take striped backups, splitting the backup file across multiple block blobs. The maximum number of stripes is 64, so we can now back up beyond the earlier limit of 1 TB, up to 12.8 TB (64 * 200 GB).

The command to take such a backup is "BACKUP ... TO URL", so it is also known as backup2url.
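A backup-to-URL sketch, where the database name, storage account, container, and credential name are all placeholders for your own:

```sql
-- Assumes a credential for the storage account already exists, e.g.:
--   CREATE CREDENTIAL myCredential
--   WITH IDENTITY = 'mystorageaccount', SECRET = '<access key>';
BACKUP DATABASE AdventureWorks2016
TO URL = 'https://mystorageaccount.blob.core.windows.net/backups/AdventureWorks2016.bak'
WITH CREDENTIAL = 'myCredential',
     COMPRESSION, STATS = 5;
```

To stripe past a single blob's size limit, list several TO URL targets in the same statement.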

File-Snapshot Backups

SQL Server 2016 now has a file-snapshot backup feature available for databases whose files are stored in the Azure Blob Store. This feature is a significant help for SQL Server running on an Azure virtual machine. It uses the Windows Azure blob snapshot capability to take a backup of the database.

This feature can ONLY be used if the files of the database reside in Azure blob storage. An error is raised if we try to take a FILE_SNAPSHOT backup of a regular database whose files reside on local disk. To learn more, you can join the oracle dba course available at the dba institute in Pune.


How does SQL tuning practices help to increase database performance?

With the added complexity of growing data volumes and ever-changing workloads, database performance tuning is now necessary to maximize resource utilization and system performance. However, performance tuning is often easier said than done.

Let's face it, tuning is difficult for several reasons. For one thing, it requires a lot of expertise to understand execution plans, and often to update or rewrite SQL well. On top of that, tuning is usually very time-consuming. There will always be a large volume of SQL statements to go through, which can create doubt about which specific statement needs tuning; and since every statement is different, so is the tuning approach.

As data volumes grow and technology becomes increasingly complex, it is becoming more essential to tune databases properly to deliver a good end-user experience and to lower infrastructure costs. Performance tuning can help database professionals quickly identify bottlenecks, focus on inefficient operations through review of query execution plans, and eliminate guessing games.

Regardless of complexity or skill level, the following MySQL performance tuning tips serve as a step-by-step guide to resolving MySQL performance issues, and help relieve the pain points that often accompany performance tuning.

Gather Baseline Metrics

Effective data collection and analysis is essential for identifying and fixing performance issues. That said, before performance tuning starts, it is important to set expectations for how long the process will take, as well as to know how long the query should ideally run, whether that is 1 second, 10 minutes, or one hour.

This stage should include gathering all of your baseline metrics, such as rows examined and rows sent, and recording how long the query runs now. It is also necessary to collect wait and thread states, such as "System lock", "Sending data", "statistics", and "Writing to net". These wait states give great clues about where to concentrate tuning efforts.
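A quick way to sample some of those baseline numbers in MySQL (the SHOW commands are standard; the query under test is your own):

```sql
-- Reset session counters, run the query being tuned, then read the counters.
FLUSH STATUS;
-- (run the query under test here)
SHOW SESSION STATUS LIKE 'Handler_read%';  -- rows examined via each access path

-- Thread states ("Sending data", "System lock", ...) across all connections:
SHOW FULL PROCESSLIST;
```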

Examine the Execution Plan

Seeing the execution plan is vital as you work to map out query performance. Fortunately, MySQL offers several ways to view an execution plan and simple ways to inspect the query. For example, to get a tabular view of the plan, use EXPLAIN, EXPLAIN EXTENDED, or Optimizer Trace.

For a more visual perspective and additional insight into the expensive steps of an execution plan, use MySQL Workbench. The plan lists steps from top to bottom, showing the select type, table names, possible keys, the chosen key and its length, the ref column, and the number of rows to read. The "Extra" column provides more detail about how MySQL will filter, sort, and access the data.
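A tabular plan for a hypothetical two-table query looks like this (orders and customers are placeholder tables):

```sql
-- One row of EXPLAIN output per table: select type, table, possible keys,
-- chosen key, key length, ref, estimated rows, and the Extra column.
EXPLAIN
SELECT o.id, o.total
FROM   orders o
JOIN   customers c ON c.id = o.customer_id
WHERE  c.country = 'DE';
```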

Review the Table and Index

Now that the metrics have been collected and the execution plan examined, it's time to review the table and index details in the query, as these details will eventually inform your tuning strategy. To begin with, it is essential to know where the tables reside and their sizes. Also, review the keys and constraints to see how the tables are related. Another area to focus on is the size and makeup of the columns, especially those in the WHERE clause.

A little trick you can use to get the sizes of the tables is the command "mysqlshow --status <dbname>" at the command line. The "SHOW INDEX FROM <table_name>" statement is also helpful to check the indexes and their cardinality, as this helps drive the execution plan. In particular, identify whether the indexes are multi-column and in what order the columns appear within the index: MySQL will only use an index if its left-leading column is referenced.
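Both commands from the paragraph above, with a placeholder database and table:

```sql
-- At the shell: table sizes and row counts for a database.
--   mysqlshow --status mydb

-- In the client: indexes and their cardinality for one table.
SHOW INDEX FROM orders;
-- For a multi-column index on (a, b), a query must reference the
-- left-leading column a for MySQL to use the index at all.
```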

Consider SQL Diagramming

After collecting and examining all of these details, it's time to finally begin tuning. Often there are so many possible execution paths for a badly performing query that the optimizer cannot evaluate them all. To work around this, a useful technique is SQL diagramming, which provides a statistical view of the problem to help the tuner find a better execution path than the optimizer did. SQL diagramming can also be applied during tuning to help reveal bugs within a complex query. Many times it is unclear why the optimizer is doing what it is doing, but SQL diagramming helps map a better path through the problem, which can save companies from expensive mistakes.

Effective Tracking for MySQL Tuning

Monitoring can easily be neglected, but it is a vital step in ensuring the problem within the database is resolved, and stays resolved. After tuning, it is essential to keep observing the improvements made. To do this, take new measurements and compare them against the initial numbers to prove the tuning made a difference. Following a continuous monitoring process, watch for the next tuning opportunity, as there is always room for improvement.

Identify MySQL Bottlenecks with Response-Time Analysis

If there are system slowdowns and your end users are complaining, you need to get to the root cause of the problem, and fast. Traditional MySQL performance monitoring tools track resource metrics and concentrate on server health.

Response-time analysis tools are different because they focus on time, not on resource metrics; the analysis is based on what the application and database engine are waiting for, which is captured in MySQL waits. Response-time analysis is an efficient way to solve complex performance issues by looking at where the database engine is spending time. It goes beyond measuring query execution times or identifying slow queries, to determining what exactly is making a query slow.

Response-time analysis tools, such as DPA, go beyond showing wait times or hardware metrics: they correlate wait times with queries, response time, resources, storage performance, execution plans, and other dimensions to give you the ability to know what goes on inside your database and what is slowing performance down.

The Benefits of Performance Tuning MySQL

Understanding what drives performance for your database allows you to save money by right-sizing your servers and avoiding over-provisioning. It also helps you understand whether moving to flash storage, or adding server capacity, will improve performance, and if so, by how much.

As with much in IT, database performance tuning is not without its challenges. However, tuning proves worthwhile, as it can give companies more bang for the buck rather than simply throwing more hardware at the problem.

Remember: MySQL tuning is an iterative process. As data grows and workloads change, there will always be new tuning opportunities. The best oracle training always provides the oracle certification courses.
