Tag Archives: .net

How To Deal With Camel Casing In ASP.NET Core Web API

If you have worked with Web API in ASP.NET Core, you must have noticed that while serializing data for the client, ASP.NET Core Web API uses camel casing. In other words, suppose your server-side C# class looks like this:


[Table("Employees")]
public class Employee
{
    [DatabaseGenerated(DatabaseGeneratedOption.Identity)]
    [Required]
    public int EmployeeID { get; set; }

    [Required]
    public string FirstName { get; set; }

    [Required]
    public string LastName { get; set; }

    [Required]
    public string City { get; set; }
}

Then, after JSON serialization, a sample Employee object will look like this:

{
    "employeeID": 1,
    "firstName": "Nancy",
    "lastName": "Davolio",
    "city": "Seattle"
}

You will see that all the property names get converted to their camel-case equivalents (EmployeeID to employeeID, FirstName to firstName, and so on).

This default behaviour doesn't pose many problems if the client is a C# application (HttpClient based), but if the client is JavaScript you might need to change it. Even with JavaScript clients, camel casing often isn't a problem, because it is used heavily in the JavaScript world: if you are using a JS framework, chances are you are already using camel casing for data binding and similar tasks.
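For C# clients, in fact, the default rarely causes trouble, because Json.NET matches JSON property names to C# members case-insensitively. A small illustrative sketch (the payload is the sample from above; the anonymous template stands in for the Employee class):

```csharp
using Newtonsoft.Json;

// camel-cased JSON, as the Web API would emit it
var json = "{\"employeeID\":1,\"firstName\":\"Nancy\"}";

// Json.NET matches names case-insensitively, so the Pascal-cased
// members of the template are still populated from camelCase JSON
var template = new { EmployeeID = 0, FirstName = "" };
var employee = JsonConvert.DeserializeAnonymousType(json, template);
// employee.EmployeeID is 1, employee.FirstName is "Nancy"
```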

At times, though, you might want to preserve the casing of the original C# property names. Suppose you are moving or reusing a piece of JavaScript code that uses the same casing as the C# class, and you want to prevent that code from breaking. This requires ASP.NET Core's JSON serialization to preserve the casing of the underlying C# class.

Although the default behaviour is to use camel casing, you can change it to preserve the original casing. Here is how.

Open the Startup class of the Web API application and look for the ConfigureServices() method. Currently it contains the following call:

services.AddMvc();

Change that line to this:

services.AddMvc()
    .AddJsonOptions(options =>
        options.SerializerSettings.ContractResolver
            = new DefaultContractResolver());

The above code uses the AddJsonOptions() extension method to set the ContractResolver property to an instance of the DefaultContractResolver class.

Note that for the above code to compile correctly you must do the following:

Bring in the Newtonsoft.Json.Serialization namespace

Add the NuGet package Microsoft.AspNetCore.Mvc.Formatters.Json

If you run the application now, you will see in the JSON data that the camel casing is gone. Remember that DefaultContractResolver preserves whatever casing the C# class uses; it doesn't automatically convert it to Pascal casing. Observe the casing of the properties: whatever casing the underlying C# class uses is exactly what is emitted during JSON serialization.
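With DefaultContractResolver in place, the same sample Employee object now serializes with its original Pascal casing preserved:

```json
{
  "EmployeeID": 1,
  "FirstName": "Nancy",
  "LastName": "Davolio",
  "City": "Seattle"
}
```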

What if you need to state explicitly that you want camel casing? It's quite simple: just use the CamelCasePropertyNamesContractResolver class. The following code shows how:

services.AddMvc()
    .AddJsonOptions(options =>
        options.SerializerSettings.ContractResolver
            = new CamelCasePropertyNamesContractResolver());

Now you set the ContractResolver to a new instance of the CamelCasePropertyNamesContractResolver class, which uses camel casing during JSON serialization.

With this we conclude. Hope the discussion was helpful for you. Keep coding!

If you want to enhance yourself in Dot Net Course and improve yourself through Dot NET training program; our institute would be of great help and support. We offer a well structured program for the Best .Net Course. Among many reputed institutes of dot net training and placement in Pune, CRB Tech has created a niche for itself.

Stay connected to CRB Tech for your technical up-gradation and to remain updated with all the happenings in the world of Dot Net.


Registering Custom Directories For Views In ASP.NET MVC

In ASP.NET MVC, views for controller actions live in the Views directory. For instance, by default the template creates a Home controller with an Index action, and if you look in Solution Explorer you will see a Views directory containing a Home folder with Index.cshtml, corresponding to this action:


public class HomeController : Controller
{
    public ActionResult Index()
    {
        return View();
    }
}


By default, the engine first looks for the Index.cshtml file in the Views/Home folder, and if it cannot find it there it looks in Views/Shared. If it does not find it there either, an exception is thrown saying the view file was not found. Here is the exception text:

The view 'Index' or its master was not found or no view engine supports the searched locations. The following locations were searched:

~/Views/Home/Index.aspx

~/Views/Home/Index.ascx

~/Views/Shared/Index.aspx

~/Views/Shared/Index.ascx

~/Views/Home/Index.cshtml

~/Views/Home/Index.vbhtml

~/Views/Shared/Index.cshtml

~/Views/Shared/Index.vbhtml

The same applies to a partial view: when you call return PartialView(), the engine first looks in the respective controller's views directory (Views/Home in the case of HomeController) and, on failure, looks in the Views/Shared folder.

Now what if you create a separate directory for partial views inside the Views folder and the Shared folder, say Views/Home/Partials and Views/Shared/Partials? Then you have to tell the view engine to look in those directories as well, by adding the code shown below to the Application_Start event in the Global.asax file.

Suppose you return _LoginPartial.cshtml from the Index action of the HomeController. The engine will look in Views/Home first and, on failure, in Views/Shared. But this time the partial views live in a separate Partials directory for each controller and for Shared as well; in this case the HomeController partial views reside in Views/Home/Partials and in Views/Shared/Partials:

public class HomeController : Controller
{
    public ActionResult Index()
    {
        return PartialView("_LoginPartial");
    }
}

In this case you will get the same exception, because the engine is unable to find the view file _LoginPartial.cshtml.

The beauty of the ASP.NET MVC framework is its extensibility. If you want your own directory structure for organizing your views, you can register those directories with the Razor view engine. This makes life a little easier, since you no longer have to specify the fully qualified path of a view; Razor knows it needs to look in the registered directories as well.

What you need to do is register this directory pattern in the application, so that every time you request a view, the engine also looks in the directories where you placed the view files. Here is the code for that:

public class MvcApplication : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        AreaRegistration.RegisterAllAreas();
        WebApiConfig.Register(GlobalConfiguration.Configuration);
        FilterConfig.RegisterGlobalFilters(GlobalFilters.Filters);
        RouteConfig.RegisterRoutes(RouteTable.Routes);
        BundleConfig.RegisterBundles(BundleTable.Bundles);
        AuthConfig.RegisterAuth();

        RazorViewEngine razorEngine = ViewEngines.Engines.OfType<RazorViewEngine>().FirstOrDefault();
        if (razorEngine != null)
        {
            var newPartialViewFormats = new[]
            {
                "~/Views/{1}/Partials/{0}.cshtml",
                "~/Views/Shared/Partials/{0}.cshtml"
            };
            razorEngine.PartialViewLocationFormats =
                razorEngine.PartialViewLocationFormats.Union(newPartialViewFormats).ToArray();
        }
    }
}

So when you call return PartialView(), the engine will also look in the controller's Views subdirectory named Partials and, if it does not find the view there, it will look in both Views/Shared and Views/Shared/Partials.

In a similar way you can register other directories or your own custom directory structure. Doing so means you no longer have to specify the complete path, like return View("~/Views/Shared/Partials/Index.cshtml"). Instead you can simply write return View() if you want to load the Index view from the Index action, or return View("Index") if another action needs to return the Index view.
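As an aside, the placeholders in these location format strings follow a fixed convention: {0} is the view name and {1} is the controller name ({2} is the area, where applicable). Plain string formatting shows how a registered format expands:

```csharp
// In Razor location formats, {0} = view name and {1} = controller name
var format = "~/Views/{1}/Partials/{0}.cshtml";

// For HomeController returning PartialView("_LoginPartial"):
var resolvedPath = string.Format(format, "_LoginPartial", "Home");
// resolvedPath is "~/Views/Home/Partials/_LoginPartial.cshtml"
```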

We conclude now. Keep coding!


The Need Of Correlation IDs In ASP.Net

Correlation IDs are becoming a very common requirement now that "microservices" communicating over HTTP are commonplace.


Why is there a need of correlation ID?

A problem that develops when multiple services are separated into units is how to track the flow of a single user request through each of the individual services involved in generating a response.

This is essential for logging and finding faults in the system. Once the request leaves the first API service and control passes to the backend API service, the logs become hard to follow.

A failure there will impact the front-end API service, but in the logs those could appear to be two unrelated errors. We need a way to see the entire flow, from the time a request hits the front-end API, through the backend API and back again.

Solution

Here the concept of correlation IDs comes into play. A correlation ID is a unique identifier that is passed through the entire request flow and handed on between services.

When each service needs to log something, it can include this correlation ID, ensuring that we can track a full user request from start to finish.
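Passing the ID on to a downstream service is then just a matter of copying it onto the outgoing request. A minimal sketch using HttpRequestMessage (the URL and ID value here are illustrative):

```csharp
using System.Net.Http;

// normally this would be read from the incoming request's headers
var correlationId = "0f8fad5b-d9cb-469f-a165-70867728950e";

// attach the same ID to the outgoing call so the backend logs can be correlated
var request = new HttpRequestMessage(HttpMethod.Get, "http://backend.example/api/orders");
request.Headers.Add("X-Correlation-ID", correlationId);
```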

Correlation ID Options

This middleware is quite simple and straightforward, and has only two options.

public class CorrelationIdOptions
{
    private const string DefaultHeader = "X-Correlation-ID";

    /// <summary>
    /// The header field name where the correlation ID will be stored
    /// </summary>
    public string Header { get; set; } = DefaultHeader;

    /// <summary>
    /// Controls whether the correlation ID is returned in the response headers
    /// </summary>
    public bool IncludeInResponse { get; set; } = true;
}

Both the sending and receiving APIs need to use the same header name so that they can locate and pass on the correlation ID. This defaults to "X-Correlation-ID", a conventional name for this type of header.

The second option decides whether the correlation ID is included in the HttpResponse.

Middleware

The main part of the solution is a piece of middleware, plus an extension method to provide an easy way to register it. The code follows the patterns defined in ASP.NET Core. The middleware class looks like this:

public class CorrelationIdMiddleware
{
    private readonly RequestDelegate _next;
    private readonly CorrelationIdOptions _options;

    public CorrelationIdMiddleware(RequestDelegate next, IOptions<CorrelationIdOptions> options)
    {
        if (options == null)
        {
            throw new ArgumentNullException(nameof(options));
        }

        _next = next ?? throw new ArgumentNullException(nameof(next));
        _options = options.Value;
    }

    public Task Invoke(HttpContext context)
    {
        if (context.Request.Headers.TryGetValue(_options.Header, out StringValues correlationId))
        {
            context.TraceIdentifier = correlationId;
        }

        if (_options.IncludeInResponse)
        {
            // apply the correlation ID to the response header for client side tracking
            context.Response.OnStarting(() =>
            {
                context.Response.Headers.Add(_options.Header, new[] { context.TraceIdentifier });
                return Task.CompletedTask;
            });
        }

        return _next(context);
    }
}

The constructor accepts the configuration through the IOptions<T> concept.

The main Invoke method, which is called by the framework, is where the real work happens.

First, it checks for an existing correlation ID arriving in a request header. This takes advantage of one of the improvements in C# 7: the out variable can be declared inline, rather than having to be pre-declared before the TryGetValue line.
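In isolation, the inline out variable looks like this (a plain dictionary stands in for the request header collection):

```csharp
using System.Collections.Generic;

var headers = new Dictionary<string, string>
{
    ["X-Correlation-ID"] = "abc-123"
};

// C# 7: the out variable is declared inline in the call,
// rather than being pre-declared on a previous line
if (headers.TryGetValue("X-Correlation-ID", out string correlationId))
{
    // correlationId is usable here, and after the if block too
}
```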

The next block is optional, controlled by the IncludeInResponse configuration property. If true, the OnStarting callback is used to safely include the correlation ID header in the response.

Extension Method

The final piece of code needed is a set of extension methods that make adding the middleware to the pipeline easier for consumers. The code looks like this:

public static class CorrelationIdExtensions
{
    public static IApplicationBuilder UseCorrelationId(this IApplicationBuilder app)
    {
        if (app == null)
        {
            throw new ArgumentNullException(nameof(app));
        }

        return app.UseMiddleware<CorrelationIdMiddleware>();
    }

    public static IApplicationBuilder UseCorrelationId(this IApplicationBuilder app, string header)
    {
        if (app == null)
        {
            throw new ArgumentNullException(nameof(app));
        }

        return app.UseCorrelationId(new CorrelationIdOptions { Header = header });
    }

    public static IApplicationBuilder UseCorrelationId(this IApplicationBuilder app, CorrelationIdOptions options)
    {
        if (app == null)
        {
            throw new ArgumentNullException(nameof(app));
        }

        if (options == null)
        {
            throw new ArgumentNullException(nameof(options));
        }

        return app.UseMiddleware<CorrelationIdMiddleware>(Options.Create(options));
    }
}

This provides a UseCorrelationId method as an extension on IApplicationBuilder. The overloads register the CorrelationIdMiddleware, optionally taking a custom name for the header, or a complete CorrelationIdOptions object, to control how the middleware behaves.

Middleware like this is an easy way to build the logic needed for correlation IDs into an ASP.NET Core application.
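Wiring it up is then one line in the Configure method of the Startup class; register it early so every later component in the pipeline sees the ID (a sketch, assuming the extension method above):

```csharp
public void Configure(IApplicationBuilder app)
{
    // register early so downstream middleware and MVC see the correlation ID
    app.UseCorrelationId();

    app.UseMvc();
}
```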

We conclude now. Keep coding!


.Net Core Preview For SDK, Installed runtime & Framework host versions

.NET Core is the high-performance implementation of .NET for developing web applications and services that run on Windows, Mac and Linux. It is open source and shares code with the .NET Framework and Xamarin apps.

On installing .NET Core you will notice that there are several pieces that fit together, some of which are optional. In a development environment you will want the SDK: a development toolkit containing the CLI (the command line tool you use to create and run Core projects) along with other things, such as the language compilers.


Please note that ".NET Core" here refers to one of the three main .NET runtimes; the other two are the .NET Framework and Mono (for Xamarin).

The .NET Core runtime comprises a number of libraries: the framework libraries and the Core CLR. Then there is also the SDK, and the app host (discussed later).

To many people, the most confusing part is the different releases. The SDK version does not match the runtime version, and you can have different releases of the SDK and the runtime installed side by side.

When you download the SDK, a specific version of the .NET Core runtime is usually installed with it. If you require additional releases you can download them separately.

"Usually installed with the SDK" means it depends. Whether you download the SDK from the Microsoft downloads site or from GitHub, there will be information somewhere letting you know if, and which, runtimes come with it. For older downloads, take a look at the release notes if it is unclear. You should see something like this:

"The .NET Core SDK 1.0.4 includes .NET Core 1.0.5 and 1.1.2 runtimes, so downloading the runtime packages separately is not needed when installing the SDK."

The command line tool used for working with Core projects is added to the path when you install the SDK, and is run via the 'dotnet' executable. Running dotnet --version prints the SDK version (!), while running dotnet on its own prints the host version. To find the runtime versions you have installed, you must look elsewhere.

There is often some confusion about getting these version numbers, and after updating services and environments you have to verify that you have the right versions of the SDK and runtime on your servers and development machines.

Here is an example of a function added to a PowerShell module for this purpose:

Function Get-CoreInfo {
    if (Test-Path "$env:programfiles/dotnet/") {
        try {
            [Collections.Generic.List[string]] $info = dotnet

            $versionLineIndex = $info.FindIndex( {$args[0].ToString().ToLower() -like "*version*:*"} )

            $runtimes = (ls "$env:programfiles/dotnet/shared/Microsoft.NETCore.App").Name | Out-String

            $sdkVersion = dotnet --version

            $fhVersion = (($info[$versionLineIndex]).Split(':')[1]).Trim()

            return "SDK version: `r`n$sdkVersion`r`n`r`nInstalled runtime versions:`r`n$runtimes`r`nFramework Host:`r`n$fhVersion"
        }
        catch {
            $errorMessage = $_.Exception.Message
            Write-Host "Something went wrong`r`nError: $errorMessage"
        }
    }
    else {
        Write-Host 'No SDK installed'
        return ""
    }
}

We conclude now. Keep coding!


Defining Custom Logging Messages Using LoggerMessage.Define In ASP.NET Core

One of the nicest features introduced in ASP.NET Core is the universal logging infrastructure. In this post we shall discuss one of the helper methods in the logging library, and how to use it to efficiently log messages in your libraries.


Logging Overview

The logging facility is exposed as the ILogger<T> and ILoggerFactory interfaces, which you can inject into your services using dependency injection to log messages in various ways. For instance, in the following ProductController, a message is logged when the View action is invoked.

public class ProductController : Controller
{
    private readonly ILogger _logger;

    public ProductController(ILoggerFactory loggerFactory)
    {
        _logger = loggerFactory.CreateLogger<ProductController>();
    }

    public IActionResult View(int id)
    {
        _logger.LogDebug("View Product called with id {Id}", id);
        return View();
    }
}

The ILogger can log messages at several levels, given by the LogLevel enum:

public enum LogLevel
{
    Trace = 0,
    Debug = 1,
    Information = 2,
    Warning = 3,
    Error = 4,
    Critical = 5,
}

The final piece of the logging infrastructure is logging providers. These are the "sinks" the logs are written to. You can plug in several providers and write logs to a variety of locations, for instance the console, a file, or Serilog.

One good thing about the logging infrastructure, and the universal use of DI in the ASP.NET Core libraries, is that the same interfaces and classes are used throughout the framework libraries as well as in your application.
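For reference, providers are typically registered in Startup.Configure in ASP.NET Core 1.x; for example, the console and debug providers, with the console provider reading its filter settings from the "Logging" configuration section (a sketch, assuming a Configuration property on Startup):

```csharp
public void Configure(IApplicationBuilder app, ILoggerFactory loggerFactory)
{
    // the console provider takes its filters from the "Logging" section of appsettings.json
    loggerFactory.AddConsole(Configuration.GetSection("Logging"));
    loggerFactory.AddDebug();

    app.UseMvc();
}
```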

How to control logs produced by different categories

When creating a logger with CreateLogger<T>, the type name you provide is used as the category for the logs. At the application level, you can choose which LogLevels are output for a given category.

For instance, you can specify that by default Debug or higher level logs are written to the providers, but for logs written by services in the Microsoft namespace, only logs of Warning level or higher are written.

With this approach you can control the amount of logging produced by the various libraries in your application, raising logging levels for only the areas that need them.

Logs without filtering

If you look at the unfiltered logs, you will see that most of them come from internal components, from classes in the Microsoft namespace. It's mostly noise. You can filter out sub-Warning logs in the Microsoft namespace but retain other logs at Debug level:

Logs with filtering

With a default ASP.NET Core 1.X template, you change the appsettings.json file and set the log levels to Warning as appropriate:

{
  "Logging": {
    "IncludeScopes": false,
    "LogLevel": {
      "Default": "Debug",
      "System": "Warning",
      "Microsoft": "Warning"
    }
  }
}

Note: In ASP.NET Core 1.X, filtering is somewhat of an afterthought. Some logging providers, like the Console provider, let you specify how to filter; alternatively, you can apply filters to all providers at the same time using the WithFilter method.

Developing logging delegates with the LoggerMessage Helper

The LoggerMessage class lives in the Microsoft.Extensions.Logging.Abstractions package and has a number of static, generic Define methods that return an Action<> which, in turn, can be used to create strongly-typed logging extensions.

The strongly-typed logging extension methods

In this example, we are going to log the time at which the HomeController.Index action method executes:

public class HomeController : Controller
{
    private readonly ILogger<HomeController> _logger;

    public HomeController(ILogger<HomeController> logger)
    {
        _logger = logger;
    }

    public IActionResult Index()
    {
        _logger.HomeControllerIndexExecuting(DateTimeOffset.Now);
        return View();
    }
}

The HomeControllerIndexExecuting method is a custom extension method that takes a DateTimeOffset parameter. It can be defined as:

internal static class LoggerExtensions
{
    private static Action<ILogger, DateTimeOffset, Exception> _homeControllerIndexExecuting;

    static LoggerExtensions()
    {
        _homeControllerIndexExecuting = LoggerMessage.Define<DateTimeOffset>(
            logLevel: LogLevel.Debug,
            eventId: 1,
            formatString: "Executing 'Index' action at '{StartTime}'");
    }

    public static void HomeControllerIndexExecuting(
        this ILogger logger, DateTimeOffset executeTime)
    {
        _homeControllerIndexExecuting(logger, executeTime, null);
    }
}

The HomeControllerIndexExecuting method is an ILogger extension method that invokes a static Action field on our static LoggerExtensions class. The _homeControllerIndexExecuting field is initialised using the ASP.NET Core LoggerMessage.Define method, by supplying a logLevel, an eventId and the formatString used to create the log.

That seems like a lot of effort: you could just call _logger.LogDebug() directly inside the HomeControllerIndexExecuting extension method. The goal, though, is to log messages performantly when a category is filtered out, without having to write if (_logger.IsEnabled(LogLevel.Debug)) everywhere. The answer lies in the LoggerMessage.Define<T> method.

The LoggerMessage.Define method

The purpose of this method is threefold:

Encapsulate the if statement, to allow performant logging

Enforce that the correct strongly-typed parameters are passed when the message is logged

Ensure the log message contains the correct number of placeholders for the parameters

Let's look at how the method is implemented:

public static class LoggerMessage
{
    public static Action<ILogger, T1, Exception> Define<T1>(
        LogLevel logLevel, EventId eventId, string formatString)
    {
        var formatter = CreateLogValuesFormatter(
            formatString, expectedNamedParameterCount: 1);

        return (logger, arg1, exception) =>
        {
            if (logger.IsEnabled(logLevel))
            {
                logger.Log(logLevel, eventId, new LogValues<T1>(formatter, arg1), exception, LogValues<T1>.Callback);
            }
        };
    }
}

First, this checks that the given format string ("Executing 'Index' action at '{StartTime}'") contains the correct number of named parameters. It then returns an action with the required number of generic parameters. There are overloads of the Define method taking 0 to 6 generic parameters, depending on the number you need for your custom log message.

We conclude now. Keep coding! 


Error Handling In ASP.NET Core Web API

Why is there a need for an alternate approach?

In .NET Core, MVC and Web API have been merged, so you have similar controllers for both MVC and API actions. Despite the similarities, when it comes to dealing with errors, you will likely want a different approach for API errors.

MVC actions are executed as a result of a user action in the browser, so returning an error page to the browser is the right approach. With an API, this is not generally the case.

API calls are frequently made from back-end code or JavaScript, and in either case you never want to simply display the raw response from the API. Instead, you check the status code and parse the response to decide whether the action was successful, displaying data to the user as necessary. An error page is not useful in such circumstances: it bloats the response with HTML and makes the client code problematic, because JSON (or XML) is expected, not HTML.


While we have to return data in a different format for Web API actions, the techniques for handling errors also differ from MVC.

The minimalistic approach

With MVC actions, failure to show a friendly error page is unacceptable in a professional application. With an API, while not ideal, empty response bodies are far more admissible for many invalid request types. Simply returning a 404 status code for an API route that does not exist may give the client enough information to fix their code.

With no configuration, this is what ASP.NET Core gives you.

Depending on your requirements, this may be satisfactory for some status codes, but it will rarely be sufficient for validation failures. If a client passes you invalid data, returning a bare 400 Bad Request is not useful enough for the client to diagnose the issue. At a minimum, you need to tell them which fields are incorrect, and ideally you would return an informative message for each failure.

With ASP.NET Core Web API, this is trivial. Assuming you are using model binding, you get validation for free by using data annotations and/or IValidatableObject. Returning the validation information to the client as JSON is one simple line of code.

Here is a model:

public class GetProductRequest : IValidatableObject
{
    [Required]
    public string ProductId { get; set; }

    public IEnumerable<ValidationResult> Validate(ValidationContext validationContext)
    {
        if (…)
        {
            yield return new ValidationResult("ProductId is invalid", new[] { "ProductId" });
        }
    }
}

And our controller action:

[HttpGet("product")]
public IActionResult GetProduct(GetProductRequest request)
{
    if (!ModelState.IsValid)
    {
        return BadRequest(ModelState);
    }
}

A missing ProductId leads to a 400 status code plus a JSON response body like the following:

{
    "ProductId": ["The ProductId field is required."]
}

This offers the minimum a client needs to use your service, and it is not hard to build on this pattern to create an improved client experience.

Returning extra information for particular errors

To be honest, it is easy to provide extra information beyond a bare status code, and doing so is highly recommended. There are numerous circumstances where a status code by itself cannot identify the reason for failure. Consider 404 status codes, for instance; one of these could mean:

We are making the request to the wrong site (perhaps the "www" site as opposed to the "api" subdomain)

The domain is right but the URL does not match any route

The URL resolves to the correct route but the resource does not exist

Here is an attempt at handling this situation:

[HttpGet("product")]
public async Task<IActionResult> GetProduct(GetProductRequest request)
{
    var model = await _db.Get(…);

    if (model == null)
    {
        return NotFound("Product not found");
    }

    return Ok(model);
}

We are now returning a more useful message, but it is a long way from ideal. The main concern is that by passing a string to the NotFound method, the framework will return this string as a plain-text response, not JSON.

For a client, a service returning different content types for errors is much harder to deal with than a consistent JSON service.

This issue could quickly be addressed by changing the code to what appears below, but in the next section we will discuss a better option.

return NotFound(new { message = "Product not found" });

Customizing the response structure for consistency

Creating anonymous objects on the fly is not the way to go if you want a consistent client experience. Ideally, the API should return a similar response structure in all cases, even when the request was not successful.

Let's define a base ApiResponse class:

public class ApiResponse

{

public int StatusCode { get; }

[JsonProperty(NullValueHandling = NullValueHandling.Ignore)]

public string Message { get; }

public ApiResponse(int statusCode, message = null)

{

StatusCode = statusCode;

Message = message ?? GetDefaultMessageForStatusCode(statusCode);

}

private static string GetDefaultMessageForStatusCode(int statusCode)

{

switch (statusCode)

{

case 404:

return "Resource not found";

case 500:

return "An unhandled error occurred";

default:

return null;

}

}

}

We also need a derived ApiOkResponse class for returning data:

public class ApiOkResponse : ApiResponse

{

public object Result { get; }

public ApiOkResponse(object result)

:base(200)

{

Result = result;

}

}

Finally, let's declare an ApiBadRequestResponse class to handle validation errors.

public class ApiBadRequestResponse : ApiResponse

{

public IEnumerable<string> Errors { get; }

public ApiBadRequestResponse(ModelStateDictionary modelState)

: base(400)

{

if (modelState.IsValid)

{

throw new ArgumentException("ModelState must be invalid", nameof(modelState));

}

Errors = modelState.SelectMany(x => x.Value.Errors)

.Select(x => x.ErrorMessage).ToArray();

}

}

These classes are deliberately simple but can be customised as per your requirements.

If the action is changed to use these ApiResponse-based classes, it becomes:

[HttpGet("product")]

public async Task<IActionResult> GetProduct(GetProductRequest request)

{

if (!ModelState.IsValid)

{

return BadRequest(new ApiBadRequestResponse(ModelState));

}

var model = await _db.Get(…);

if (model == null)

{

return NotFound(new ApiResponse(404, $"Product not found with id {request.ProductId}"));

}

return Ok(new ApiOkResponse(model));

}

The code is a little more verbose, but every response now has a predictable shape.

Centralising Validation Logic

Given that validation is required almost everywhere, it makes sense to refactor this code into an action filter. This reduces the size of our actions, removes duplicated code and enhances consistency.

public class ApiValidationFilterAttribute : ActionFilterAttribute

{

public override void OnActionExecuting(ActionExecutingContext context)

{

if (!context.ModelState.IsValid)

{

context.Result = new BadRequestObjectResult(new ApiBadRequestResponse(context.ModelState));

}

base.OnActionExecuting(context);

}

}
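With the filter in place, actions no longer need the explicit ModelState check. Below is a sketch of how the earlier action might look once the filter is applied; the attribute usage is illustrative, and the filter could equally be registered globally via MvcOptions.Filters:

```csharp
[ApiValidationFilter]
[HttpGet("product")]
public async Task<IActionResult> GetProduct(GetProductRequest request)
{
    // The filter has already short-circuited invalid requests with an
    // ApiBadRequestResponse, so no ModelState check is needed here.
    var model = await _db.Get(request.ProductId);

    if (model == null)
    {
        return NotFound(new ApiResponse(404, $"Product not found with id {request.ProductId}"));
    }

    return Ok(new ApiOkResponse(model));
}
```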

Handling all errors

Responding to bad input inside controller actions is the ideal way to give specific error data to the client. Often, though, we also have to respond to more generic issues. Cases include:

A 401 Unauthorized status returned by the security middleware.

A request for a URL that does not map to a controller action, resulting in a 404.

Global exceptions. Unless you need to handle a specific exception, your actions should stay free of try/catch clutter.

As with MVC, the easiest way to manage global errors is to use UseStatusCodePagesWithReExecute and UseExceptionHandler.

We discussed StatusCodePagesWithReExecute earlier, but as a quick recap: when a non-success status code comes back from inner middleware, this middleware lets you execute another action to handle the status code and return a custom response.

UseExceptionHandler works likewise, catching and logging unhandled exceptions, and lets you execute another action to deal with the error. Here, we configure both pieces of middleware to point to the same action.

We add the middleware in Startup.cs:

app.UseStatusCodePagesWithReExecute("/error/{0}");

app.UseExceptionHandler("/error/500");

//register other middleware that might return a non-success status code

Then we add our error handling action:

[HttpGet("error/{code}")]

public IActionResult Error(int code)

{

return new ObjectResult(new ApiResponse(code));

}

With this set up, all exceptions and non-success status codes (without a response body) will be handled by our error action, which returns our standard ApiResponse.

Custom Middleware

For complete control, you can replace the built-in middleware with your own custom middleware. It intercepts any response and writes the basic ApiResponse object as JSON. Used in combination with the action code that already returns ApiResponse objects, it ensures that both success and failure responses share the same structure, and that every request results in both a status code and a predictable JSON body:

public class ErrorWrappingMiddleware

{

private readonly RequestDelegate _next;

private readonly ILogger<ErrorWrappingMiddleware> _logger;

public ErrorWrappingMiddleware(RequestDelegate next, ILogger<ErrorWrappingMiddleware> logger)

{

_next = next;

_logger = logger ?? throw new ArgumentNullException(nameof(logger));

}

public async Task Invoke(HttpContext context)

{

try

{

await _next.Invoke(context);

}

catch(Exception ex)

{

_logger.LogError(EventIds.GlobalException, ex, ex.Message);

context.Response.StatusCode = 500;

}

if (!context.Response.HasStarted)

{

context.Response.ContentType = "application/json";

var response = new ApiResponse(context.Response.StatusCode);

var json = JsonConvert.SerializeObject(response, new JsonSerializerSettings

{

ContractResolver = new CamelCasePropertyNamesContractResolver()

});

await context.Response.WriteAsync(json);

}

}

}
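To put the middleware into the pipeline, register it in the Configure() method of Startup, ahead of MVC. A minimal sketch, taking the place of the built-in error middleware shown earlier:

```csharp
public void Configure(IApplicationBuilder app)
{
    // Every response and unhandled exception now flows through
    // ErrorWrappingMiddleware and comes out as an ApiResponse JSON body.
    app.UseMiddleware<ErrorWrappingMiddleware>();

    app.UseMvc();
}
```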

Summary:

Managing errors in ASP.NET Core APIs is similar to, yet different from, MVC error handling. At the action level, we want to give back custom objects rather than custom views.

For generic errors, you can still use the StatusCodePagesWithReExecute middleware, but the error action needs to return an ObjectResult and not a ViewResult.

That concludes this article. Keep coding!

If you want to improve your skills in a Dot Net course and excel in a Dot Net training program, our institute, CRB Tech Solutions, would be of great help and support. We offer a well-structured program for the best Dot Net course. Among many reputed institutes, CRB Tech has created a niche for itself.

Stay connected to CRB Tech for your technical upgradation and to remain updated with all the happenings in the world of Dot Net.


Making Use Of Bitbucket For Git in Visual Studio

Connect to Bitbucket from Visual Studio 2017 and use Git features to create and clone a repository, commit, create and merge branches, and create and use pull requests.


Knowing that many of you are Visual Studio users, in this article we shall discuss how to create a Bitbucket account, how to add the extension to Visual Studio, and how to use various Bitbucket commands like commit, commit and push, commit and sync, pull request, merge, etc. with the help of Visual Studio 2017.

An Overview

Bitbucket is an Atlassian tool for source and version control. It makes use of either Mercurial or Git. It is a web-based service which is free for up to five users (very similar to Visual Studio Team Services, or VSTS). Bitbucket is also available on-premises.

Bitbucket supports Git which is a decentralized version control system.

With a decentralized version control system, every user can work with his/her own working copy without affecting anyone else's. Each user maintains a repository that holds all the versions of the software under development, and, when required, merging with other repositories is possible. No network connection is needed, because every user works against a local copy.

Bitbucket is useful because it gives you as many private repositories as you need, free of charge. GitHub, by comparison, gives us any number of public repositories, but private ones are not free.

If you are doing open-source development, you may prefer Git with GitHub. Bitbucket provides all the features that Git supports.

Bitbucket also provides project management tools as it comes as a part of Atlassian products. Integration to Jira is also available. Bitbucket offers shared repositories amongst team members who are working on a project.

That said, teams that move from a centralized version control system to a distributed one do face a steep learning curve.

Atlassian Bitbucket Server is the Git repository management solution for enterprise teams. It allows everyone in your organization to easily collaborate on your Git repositories.

This page will walk you through the basics of Bitbucket Server. By the end you should know how to:

  • Create accounts for collaborators, and organize them into groups with permissions.
  • Create a project and set up its permissions.
  • Create repositories, and learn the basic commands for interacting with them.

Assumptions

This guide assumes that you don't have earlier experience with Git, but it does assume that:

  • You have Git version 1.7.6 or higher on your local computer.
  • You are making use of a supported browser.
  • You have Bitbucket Server installed and running. If you haven't, take a look at the Getting Started guide.

Please refer to the Git resources for tips on getting started with Git.
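Before working against Bitbucket Server, the core Git operations mentioned above (creating a repository, committing, branching and merging) can be exercised entirely locally. A sketch using a throwaway repository and a placeholder identity, with the Bitbucket remote deliberately left out:

```shell
# Create a local repository and make an initial commit
mkdir demo-repo && cd demo-repo
git init -q
git config user.email "demo@example.com"   # placeholder identity
git config user.name "Demo User"
echo "hello" > README.md
git add README.md
git commit -q -m "Initial commit"

# Create a branch, commit on it, then merge it back
git checkout -q -b feature
echo "feature work" >> README.md
git commit -q -am "Add feature work"
git checkout -q -
git merge -q feature
git log --oneline
```

After the fast-forward merge, the log shows both commits on the original branch.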

The first thing to do in Bitbucket Server is to add collaborators.

To add users in Bitbucket Server

  1. Go to the Bitbucket Server admin area by clicking the cog, then click Users in the Admin screen (under Accounts).
  2. Click Create user to go directly to the user creation form.
  3. Once you've created a user, click Change permissions to set up their access permissions.

There are four levels of user permissions:

  • System Admin — can access all the configuration settings of Bitbucket Server.
  • Admin — same as System Admins, except that they can't modify file paths or the Bitbucket Server settings.
  • Project Creator — can create, edit and delete projects.
  • Bitbucket Server User — active users who can access Bitbucket Server.

Keep coding!

And do refer to Atlassian's documentation to gain more information.

If you want to improve your skills in a Dot Net course and excel in a Dot Net training program, our institute, CRB Tech Solutions, would be of great help to you. Come and join us in our well-structured program for the best Dot Net course. Among many reputed institutes, CRB Tech has carved its niche among the best dot net institutes.

Stay connected to CRB Tech for more technical updates and information.


What is manifest in .Net?

The manifest maintains information about an assembly, including its name, locale, version and an optional strong name that uniquely identifies the assembly. This manifest information is used by the CLR. The manifest also holds the security demands needed to verify the assembly, along with the names and hashes of all the files that make up the assembly.

Manifest in .Net Framework

The .NET assembly manifest contains a cryptographic hash of each module in the assembly. When the assembly is loaded, the CLR recalculates the hash of the modules at hand and compares it with the embedded hash. If the hash produced at runtime differs from the one in the manifest, .NET refuses to load the assembly and throws an exception.

The manifest describes the relationships and dependencies of the components in the assembly.

It records the versioning information, scope information and security permissions required by the assembly.

Assembly data such as the version, security information and scope is kept in the manifest.

The manifest also holds references to the assembly's classes and resources.

The manifest is located in an .exe or a .dll with Microsoft intermediate language (MSIL) code.

This file has metadata about .NET assemblies.

It has a collection of data that explains how the elements in the assembly are related to each other.

Place of storage

The manifest is stored in a Portable Executable (PE) file (.exe or .dll) along with the Microsoft Intermediate Language (MSIL) code.

You can add or change some of the information in the assembly manifest by using assembly attributes in your code.

You can view the manifest information for any managed DLL using ILDasm.

It performs the following functions:

– It lists the files that constitute the assembly.

– It provides a level of indirection between consumers of the assembly and the assembly's implementation details.

– It makes the assembly self-describing.

– It lists the other assemblies on which the assembly depends.

Summary

A manifest contains all the metadata needed to specify the assembly's version requirements and security identity, and all the metadata required to define the scope of the assembly and resolve references to resources and classes.

Assembly name: the text string that specifies the assembly's name.

Version number: a major and minor version number, plus a revision and build number. The CLR uses these numbers to enforce version policy.

Culture: information on the culture or language the assembly supports. This information should be used only to designate an assembly as a satellite assembly containing culture-specific or language-specific information.

Information about referenced assemblies: a list of other assemblies that are statically referenced by the assembly. Each reference includes the dependent assembly's name, assembly metadata (version, culture, operating system, and so on) and public key, if the assembly is strong-named.

Keep coding!

And do refer Microsoft’s site to gain more information.

If you want to improve your skills in a Dot Net course and excel in a Dot Net training program, our institute, CRB Tech Solutions, would be of great help to you. Come and join us in our well-structured program for the best Dot Net course. Among many reputed institutes, CRB Tech has carved its niche among the best dot net institutes.

Stay connected to CRB Tech for more technical updates and information.


ASP.NET Master Pages & Its Advantages

ASP.NET master pages help you create a consistent layout for the pages in your application. A single master page defines the look and feel and standard behaviour that you want for all of the pages in your application.

ASP.NET-Master-Pages

Working with Master Pages

Master pages have two parts, the master page itself and one or more content pages.

Master Page

A master page is an ASP.NET file with the extension .master with a predefined layout that can include static text, HTML elements, and server controls. The master page is defined by a special @Master directive that replaces the @Page directive which is used for ordinary .aspx pages.

The @Master directive can contain most of the same directives that a @Control directive can contain.

In addition to the @Master directive, the master page also has all of the top-level HTML elements for a page. You can use any HTML and ASP.NET elements as part of your master page.

Replaceable Content Placeholders

In addition to the static text and controls that will appear on all pages, the master page also includes one or more ContentPlaceHolder controls. These define regions where replaceable content will appear. In turn, the replaceable content is defined in content pages. After you have defined the ContentPlaceHolder controls, a master page might look like the following.

C# / VB

<%@ Master Language="C#" %>

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">

<html xmlns="http://www.w3.org/1999/xhtml" >

<head runat="server" >

<title>Master page title</title>

</head>

<body>

<form id="form1" runat="server">

<table>

<tr>

<td><asp:contentplaceholder id="Main" runat="server" /></td>

<td><asp:contentplaceholder id="Footer" runat="server" /></td>

</tr>

</table>

</form>

</body>

</html>

Content Pages

You define the content by developing individual content pages, which are ASP.NET pages that are bound to a specific master page. The binding is established in the content page's @Page directive by including a MasterPageFile attribute that points to the master page to be used.

In the content page, you create the content by adding Content controls and mapping them to ContentPlaceHolder controls on the master page.

How to Replace placeholder content

After creating Content controls, you add text and controls to them. In a content page, anything that is not inside a Content control results in an error. You can perform any task in a content page that you would do in an ordinary ASP.NET page. For example, you can generate content for a Content control using server controls and database queries or other dynamic mechanisms.

A content page would look like this:

VB

<% @ Page Language="VB" MasterPageFile="~/Master.master" Title="Content Page 1" %>

<asp:Content ID="Content1" ContentPlaceHolderID="Main" Runat="Server">

Main content.

</asp:Content>

<asp:Content ID="Content2" ContentPlaceHolderID="Footer" Runat="Server" >

Footer content.

</asp:Content>

[C#]

<% @ Page Language="C#" MasterPageFile="~/Master.master" Title="Content Page 1" %>

<asp:Content ID="Content1" ContentPlaceHolderID="Main" Runat="Server">

Main content.

</asp:Content>

<asp:Content ID="Content2" ContentPlaceHolderID="Footer" Runat="Server" >

Footer content.

</asp:Content>

The @Page directive binds the content page to a specific master page, and it defines a title for the page that will be merged into the master page. Note that the content page contains no other markup outside of the Content controls.

You could create multiple master pages to define different layouts for different parts of your site, and a different set of content pages for each master page.

Advantages of Master Pages

They provide functionality that developers have traditionally achieved by copying existing code, text and control elements between pages. Their advantages include:

  • They help you to centralize the common functionality of your pages such that you could make updates in just one place.
  • They make it easy to create one set of controls and code and apply the results to a set of pages.
  • They give you fine control over the layout of the final page by permitting you to control the placeholder controls.
  • They provide an object model which allows you to tailor the master page from separate content pages.

Run-time Behavior of Master Pages

At run time, master pages are managed in the following sequence:

  1. Users request a page by typing the URL of the content page.
  2. When the page is fetched, the @Page directive is read.
  3. The master page with the updated content is merged into the control tree of the content page.
  4. The content of individual Content controls is merged into the corresponding ContentPlaceHolder control in the master page.
  5. The resulting merged page is sent to the browser.

From the user's perspective, the combined master and content pages are a single, discrete page. The URL of the page is that of the content page.

From the programming point of view, the two pages act as separate containers for their respective controls. The content page acts as a container for the master page. You can refer to public master-page members from code in the content page, as described in the next section.

Remember that the master page becomes a part of the content page. In effect, the master page acts in much the same way a user control does: as a child of the content page and as a container within that page. In this case, however, the master page is the container for all of the server controls that are rendered to the browser. The control tree for a merged master and content page looks like this:

Page

  Master Page

    (Master page markup and controls)

    ContentPlaceHolder

      Content page markup and server controls

    (Master page markup and controls)

    ContentPlaceHolder

      Content page markup and server controls

    (Master page markup and controls)

Paths of Master Page and Content Page

When a content page is requested, its content is merged with the master page, and the page runs in the context of the content page. For instance, if you get the CurrentExecutionFilePath property of the HttpRequest object, whether in content page code or in master page code, the path represents the location of the content page.

The master page and content page do not need to be in the same folder. As long as the MasterPageFile attribute in the content page's @Page directive resolves to a .master page, ASP.NET can merge the content and master pages into a single rendered page.

External Resources

Both the content page and the master page can contain controls and elements that reference external resources. For instance, both might contain image controls that reference image files, or they might contain anchors that reference other pages.

Server Controls

In server controls on master pages, ASP.NET modifies the URLs of properties that reference external resources. At run time, ASP.NET modifies each URL so that it resolves correctly in the context of the content page.

ASP.NET can modify URLs in the following situations:

  • The URL is a property of an ASP.NET server control.
  • The property is marked internally in the control as being a URL. In practice, the ASP.NET server control properties that are commonly used to reference external resources are marked in this way.
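For example, an Image control declared on the master page with an app-relative path (the file name below is hypothetical) is rebased by ASP.NET at run time, so it resolves correctly no matter which folder the content page lives in:

```html
<asp:Image ID="SiteLogo" runat="server" ImageUrl="~/images/logo.gif" />
```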

Master Pages and Themes

You cannot apply an ASP.NET theme directly to a master page. If you add a theme attribute to the @Master directive, the page will raise an error when it runs.

However, themes could be applied to master pages under these circumstances:

  • If a theme is defined in the content page.
  • If the site as a whole is configured to use a theme by including a theme definition in the pages element (ASP.NET Settings Schema).

How To Scope Master Pages

You can attach content pages to a master page at three levels:

  • At the page level: you can use a page directive in each content page to bind it to a master page.
  • At the application level: by making a setting in the pages element of the application's configuration file (Web.config), you can specify that all ASP.NET pages (.aspx files) in the application automatically bind to a master page.
  • At the folder level: by making the same setting in a Web.config file in one folder, you can bind all of the ASP.NET pages in that folder to a master page.

If you use the application-level strategy, all the ASP.NET pages in the application that have Content controls are merged with the specified master page.
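A sketch of that application-level setting in Web.config (the master page path is a placeholder):

```xml
<configuration>
  <system.web>
    <!-- Binds every .aspx page in the application to this master page -->
    <pages masterPageFile="~/Master.master" />
  </system.web>
</configuration>
```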

Keep coding!

And do refer Microsoft’s site to gain more information.

If you want to improve your skills in a Dot Net course and excel in Dot Net training, our institute, CRB Tech Solutions, would be of great help to you. Come and join us in our well-structured program for the best Dot Net course. Among many reputed institutes, CRB Tech has carved its niche among the best dot net institutes.

Stay connected to CRB Tech for more technical updates and information.



What Do You Understand By HTTP Handlers and HTTP Modules

When a client requests a resource located on the server in an ASP.NET application, each request is handled by an HTTP handler. Microsoft ASP.NET has a number of built-in HTTP handlers which serve different file types such as .aspx, .asmx, etc. Based on the extension of the requested file, the appropriate HTTP handler mapped to that extension gets loaded and is responsible for processing the ASP.NET request.

Dot Net HTTP Handlers and HTTP Modules


Custom HttpHandlers

You can also create your own custom HTTP handlers and register them for request handling. You register HTTP handlers in the Web.config file using the <httpHandlers /> element. Note, however, that the registration details depend on the version of IIS.

Let's develop a small example of a custom HTTP handler. You can build one using a simple class which implements the IHttpHandler interface. This interface offers one method, ProcessRequest(), and one property, IsReusable.

  • ProcessRequest() – a method which gets invoked when a request is received. Inside this method, you can use the HttpContext object that is passed as a parameter. Through this object you can access the Request, Response and Server objects to implement your processing logic.

  • IsReusable property – when this handler is requested, the ProcessRequest method processes it. If the IsReusable property is set to true, the same handler instance is reused for processing other requests of the same type. If it is false, the handler object is destroyed after the request is processed.

Let’s develop a custom handler as shown below -

public class CustomHandler:IHttpHandler

{

public bool IsReusable

{

get { return false; }

}

public void ProcessRequest(HttpContext context)

{

context.Response.Write("<h1 style='Color:#000066'>Welcome To Custom HttpHandler</h1>");

context.Response.Write("HttpHandler processed on – " + DateTime.Now.ToString());

using (StreamWriter SW = new StreamWriter(@"E:\HandlerMessages.txt", true))

{

SW.WriteLine("The message date time is – " + DateTime.Now.ToString());

SW.Close();

}

}

}

Now let's configure our custom HttpHandler in the Web.config file as shown below -

<httpHandlers>

<add verb="*" path="*.curry" type="CustomHandlerModuleExample.CustomHandler"/>

</httpHandlers>

Now add a simple text file with the extension ".curry" and browse to that ".curry" file.

In other words, an HttpHandler is often associated with a specific extension, and a practical usage includes dynamic image generation or modification.

What is an HTTP Module

Now let's explore HttpModules. An HttpModule takes part in the request-processing pipeline of ASP.NET. During the processing of a single request, more than one module may execute. HttpModules participate in processing a request by handling application events. There are a number of events you can handle during HttpModule processing, for example BeginRequest(), EndRequest(), AuthenticateRequest(), etc.

You can also create custom HttpModules. You can design one using a simple class which implements the IHttpModule interface. This interface defines two methods:

  • Init() – this method takes an HttpApplication object as a parameter, which permits the HttpModule to register for application events.

  • Dispose() – cleanup logic can be written in this method; it is executed before the module is garbage-collected.

HttpModules are registered in Web.config using the <httpModules /> element.

Let's create a simple HttpModule as shown below, which writes a log file to the C: drive -

public class CustomHttpModule:IHttpModule

{

public void Init(HttpApplication context)

{

context.BeginRequest += new EventHandler(this.context_BeginRequest);

context.EndRequest += new EventHandler(this.context_EndRequest);

}

public void context_EndRequest(object sender, EventArgs e)

{

StreamWriter sw = new StreamWriter(@"C:\requestLog.txt", true);

sw.WriteLine("End Request called at " + DateTime.Now.ToString());

sw.Close();

}

public void context_BeginRequest(object sender, EventArgs e)

{

StreamWriter sw = new StreamWriter(@"C:\requestLog.txt", true);

sw.WriteLine("Begin request called at " + DateTime.Now.ToString());

sw.Close();

}

public void Dispose()

{

}

}

Let's register the custom HttpModule in the Web.config file as shown below -

<httpModules>

<add name="DotNetCurryModule" type="CustomHttpModule"/>

</httpModules>

Now request the same "Hello.curry" file in a browser and check the files on the C: drive. You will find a requestLog.txt.

In other words, an HttpModule executes for every request to your application, irrespective of the extension used. HttpModules are mainly used for security, statistics, logging, etc.

Keep coding!

Let us know your opinion in the comments section below. And feel free to refer to Microsoft's site to gather more information.

If you want to improve your skills in a Dot Net course and excel in a Dot Net training program, our institute, CRB Tech Solutions, would be of great help to you. Come and join us in our well-structured program for the best Dot Net course. Among many reputed institutes, CRB Tech has carved its niche among the best dot net institutes.

Stay connected to CRB Tech for more technical updates and information.
