
How To Deal Camel Casing In ASP.NET Core Web API

If you have worked with Web API in ASP.NET Core, you must have noticed that the ASP.NET Core Web API uses camel casing while serializing data for the client. In other words, suppose your server-side C# class looks like this:


[Table("Employees")]
public class Employee
{
    [DatabaseGenerated(DatabaseGeneratedOption.Identity)]
    [Required]
    public int EmployeeID { get; set; }

    [Required]
    public string FirstName { get; set; }

    [Required]
    public string LastName { get; set; }

    [Required]
    public string City { get; set; }
}

Then, after JSON serialization, a sample Employee object will look like this:

{
    "employeeID": 1,
    "firstName": "Nancy",
    "lastName": "Davolio",
    "city": "Seattle"
}

You will see that all the property names get converted to their camel case equivalents (EmployeeID to employeeID, FirstName to firstName, and so on).

This default behaviour doesn't pose many problems if the client is a C# application (HttpClient based), but if the client is a JavaScript application you might need to change it. Even with JavaScript clients, camel casing may not be a problem in many cases, because camel casing is used widely in the JavaScript world. In particular, if you are using a JS framework, chances are you are already using camel casing during data binding and similar operations.

At times, though, you might want to preserve the casing of the original C# property names. Suppose you are migrating or reusing a piece of JavaScript code that uses the same casing as the C# class; you might want to prevent that code from breaking. This requires that ASP.NET Core's JSON serialization preserve the casing of the underlying C# class.

Although the default behaviour is to use camel casing, you can change it to preserve the original casing. Here is how.

Open the Startup class of the Web API application and locate the ConfigureServices() method. Currently it contains the following call:

services.AddMvc();

Change that line to this:

services.AddMvc()
    .AddJsonOptions(options =>
        options.SerializerSettings.ContractResolver = new DefaultContractResolver());

The above code uses the AddJsonOptions() extension method, setting the ContractResolver property to an instance of the DefaultContractResolver class.

Note that for the above code to compile correctly you must do the following:

Import the Newtonsoft.Json.Serialization namespace

Add the NuGet package Microsoft.AspNetCore.Mvc.Formatters.Json

If you run the application now, you should see the JSON data with the camel casing removed. Remember that DefaultContractResolver preserves whatever casing the C# class uses; it doesn't automatically convert names to Pascal casing.

Observe the casing of the properties: the casing of the underlying C# class is preserved as-is during JSON serialization.
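For instance, with the Employee class shown earlier, the serialized output after switching to DefaultContractResolver would look something like this (sample values assumed):

```json
{
    "EmployeeID": 1,
    "FirstName": "Nancy",
    "LastName": "Davolio",
    "City": "Seattle"
}
```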

What if you need to explicitly specify that you want camel casing? It's quite simple: use the CamelCasePropertyNamesContractResolver class. The following code shows how:

services.AddMvc()
    .AddJsonOptions(options =>
        options.SerializerSettings.ContractResolver = new CamelCasePropertyNamesContractResolver());

Now you set the ContractResolver to a new instance of the CamelCasePropertyNamesContractResolver class. This enforces camel casing during JSON serialization.

With this we conclude. Hope the discussion was helpful for you. Keep coding!

If you want to enhance yourself in a .NET course and improve through a .NET training program, our institute would be of great help and support. We offer a well-structured program for the best .NET course. Among the many reputed institutes for dot net training and placement in Pune, CRB Tech has created a niche for itself.

Stay connected to CRB Tech for your technical up-gradation and to remain updated with all the happenings in the world of Dot Net.


Registering Custom Directories For Views In ASP.NET MVC

In ASP.NET MVC, when an application is created, the views for controller actions live in the Views directory. For instance, by default the template creates a Home controller with an Index action, and if you look in Solution Explorer you will see a Views directory containing a Home folder with Index.cshtml, corresponding to the action shown below:


public class HomeController : Controller
{
    public ActionResult Index()
    {
        return View();
    }
}


Obviously, it will first look for the Index.cshtml file in the Views/Home folder, and if it cannot find it there it will look in Views/Shared. If the file is not found there either, an exception is thrown saying that the view file was not found. Here is the exception text:

The view 'Index' or its master was not found or no view engine supports the searched locations. The following locations were searched:

~/Views/Home/Index.aspx

~/Views/Home/Index.ascx

~/Views/Shared/Index.aspx

~/Views/Shared/Index.ascx

~/Views/Home/Index.cshtml

~/Views/Home/Index.vbhtml

~/Views/Shared/Index.cshtml

~/Views/Shared/Index.vbhtml

The same is the case for a partial view: when you call return PartialView(), it first looks in the respective controller's views directory (Views/Home in the case of HomeController) and, on failure, looks in the Views/Shared folder.

Now what if you create a separate directory for partial views inside each controller's views folder and the Shared folder, such as Views/Home/Partials and Views/Shared/Partials? Then you have to tell the view engine to look in those directories as well, by writing the code shown below in the Application_Start event of the Global.asax file.

Suppose you return _LoginPartial.cshtml from the Index action of the HomeController. It will look in Views/Home first and, on failure, in Views/Shared. But this time the partial views live in a separate directory named Partials for every controller and for Shared as well; in this case HomeController's partial views reside in Views/Home/Partials and Views/Shared/Partials:

public class HomeController : Controller
{
    public ActionResult Index()
    {
        return View();
    }
}

In this case as well you will get the same exception, as the engine will not be able to find the view file _LoginPartial.cshtml.

The beauty of the ASP.NET MVC framework is its extensibility: you can adapt it to your personal and business needs. One example is that if you want your own directory structure for organizing your views, you can register those directories with the Razor view engine. This makes life a little easier, because you will not have to specify the fully qualified path of each view; Razor knows it needs to look for the view in the registered directories as well.

What you need to do is register this directory pattern in the application, so that every time you request a view the engine also looks in the directories where you placed the view files. Here is the code for that:

public class MvcApplication : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        AreaRegistration.RegisterAllAreas();
        WebApiConfig.Register(GlobalConfiguration.Configuration);
        FilterConfig.RegisterGlobalFilters(GlobalFilters.Filters);
        RouteConfig.RegisterRoutes(RouteTable.Routes);
        BundleConfig.RegisterBundles(BundleTable.Bundles);
        AuthConfig.RegisterAuth();

        RazorViewEngine razorEngine = ViewEngines.Engines.OfType<RazorViewEngine>().FirstOrDefault();
        if (razorEngine != null)
        {
            var newPartialViewFormats = new[]
            {
                "~/Views/{1}/Partials/{0}.cshtml",
                "~/Views/Shared/Partials/{0}.cshtml"
            };
            razorEngine.PartialViewLocationFormats =
                razorEngine.PartialViewLocationFormats.Union(newPartialViewFormats).ToArray();
        }
    }
}

So when you call return PartialView(), it will also look in the controller's views subdirectory named Partials, and if it does not find the view there it will look in both Views/Shared and Views/Shared/Partials.

In a similar way you can register other directories, or your own custom directory structure, if you wish. Doing so, you won't have to specify the complete path like return View("~/Views/Shared/Partials/Index.cshtml"). Instead you can simply write return View() if you want to load the Index view from the Index action, or return View("Index") if another action wants to return the Index view.
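As a sketch of the convenience this registration brings (the AccountController and _LoginPartial names here are illustrative, not from the original sample):

```csharp
public class AccountController : Controller
{
    public ActionResult Login()
    {
        // Without the registered locations you would need the full path:
        // return PartialView("~/Views/Shared/Partials/_LoginPartial.cshtml");

        // With Views/{1}/Partials and Views/Shared/Partials registered,
        // the short view name is enough:
        return PartialView("_LoginPartial");
    }
}
```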

We conclude now. Keep coding!



The Need Of Correlation IDs In ASP.Net

Correlation IDs are becoming a very common requirement now that "microservices" communicating over HTTP are widespread.


Why is there a need for a correlation ID?

A problem that arises from having multiple services separated into units is how to track the flow of a single user request through each of the individual services that might be involved in generating a response.

This is essential for logging and diagnosing faults in the system. Once the request leaves the first API service and control passes to the backend API service, you begin to struggle to correlate the logs.

A failure there will impact the front-end API service, but the two could appear to be unrelated errors. We need a way to see the entire flow, from the time a request hits the front-end API, through the backend API and back again.

Solution

This is where correlation IDs come into play. A correlation ID is a unique identifier that is passed through the entire request flow and passed between the services.

When each service needs to log something, it can include this correlation ID, ensuring that we can track a full user request from start to finish.

Correlation ID Options

This middleware is quite simple and has only two options.

public class CorrelationIdOptions
{
    private const string DefaultHeader = "X-Correlation-ID";

    /// <summary>
    /// The header field name where the correlation ID will be stored
    /// </summary>
    public string Header { get; set; } = DefaultHeader;

    /// <summary>
    /// Controls whether the correlation ID is returned in the response headers
    /// </summary>
    public bool IncludeInResponse { get; set; } = true;
}

Both the sending and receiving APIs need to use the same header name, so that they can locate and pass on the correlation ID. This defaults to "X-Correlation-ID", which is a standard name for this type of header.

The second option decides whether the correlation ID is included in the HttpResponse.

Middleware

The main part of the solution is a piece of middleware, plus an extension method to give consumers an easy way to register it. The code structure follows the patterns defined by ASP.NET Core. The middleware class looks like this:

public class CorrelationIdMiddleware
{
    private readonly RequestDelegate _next;
    private readonly CorrelationIdOptions _options;

    public CorrelationIdMiddleware(RequestDelegate next, IOptions<CorrelationIdOptions> options)
    {
        if (options == null)
        {
            throw new ArgumentNullException(nameof(options));
        }

        _next = next ?? throw new ArgumentNullException(nameof(next));
        _options = options.Value;
    }

    public Task Invoke(HttpContext context)
    {
        if (context.Request.Headers.TryGetValue(_options.Header, out StringValues correlationId))
        {
            context.TraceIdentifier = correlationId;
        }

        if (_options.IncludeInResponse)
        {
            // apply the correlation ID to the response header for client side tracking
            context.Response.OnStarting(() =>
            {
                context.Response.Headers.Add(_options.Header, new[] { context.TraceIdentifier });
                return Task.CompletedTask;
            });
        }

        return _next(context);
    }
}

The constructor accepts the configuration through the IOptions<T> concept.

The main Invoke method, which is called by the framework, is where the real work happens.

First, it checks for an existing correlation ID arriving through a request header. This takes advantage of one of the improvements in C# 7: you can declare the out variable inline, rather than having to pre-declare it before the TryGetValue line.

The next block is optional, controlled by the IncludeInResponse configuration property. If true, the OnStarting callback is used to safely include the correlation ID header in the response.

Extension Method

The final piece of code needed is some extension methods that make adding the middleware to the pipeline easier for consumers. The code looks like this:

public static class CorrelationIdExtensions
{
    public static IApplicationBuilder UseCorrelationId(this IApplicationBuilder app)
    {
        if (app == null)
        {
            throw new ArgumentNullException(nameof(app));
        }

        return app.UseMiddleware<CorrelationIdMiddleware>();
    }

    public static IApplicationBuilder UseCorrelationId(this IApplicationBuilder app, string header)
    {
        if (app == null)
        {
            throw new ArgumentNullException(nameof(app));
        }

        return app.UseCorrelationId(new CorrelationIdOptions { Header = header });
    }

    public static IApplicationBuilder UseCorrelationId(this IApplicationBuilder app, CorrelationIdOptions options)
    {
        if (app == null)
        {
            throw new ArgumentNullException(nameof(app));
        }

        if (options == null)
        {
            throw new ArgumentNullException(nameof(options));
        }

        return app.UseMiddleware<CorrelationIdMiddleware>(Options.Create(options));
    }
}

This provides a UseCorrelationId method as an extension on IApplicationBuilder. The overloads register CorrelationIdMiddleware, optionally taking a custom name for the header or a complete CorrelationIdOptions object, ensuring the middleware behaves as expected.

Middleware is an easy way to build the logic needed for correlation IDs into an ASP.NET Core application.
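As a sketch of how a consuming application might wire this up in its Startup class (the surrounding pipeline is an assumption; only UseCorrelationId comes from the code above):

```csharp
public class Startup
{
    public void Configure(IApplicationBuilder app)
    {
        // Register the middleware early so every downstream component
        // (MVC, logging, outgoing HTTP calls) sees the correlation ID.
        app.UseCorrelationId("X-Correlation-ID");

        app.UseMvc();
    }
}
```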

We conclude now. Keep coding!



.Net Core Preview For SDK, Installed runtime & Framework host versions

.NET Core is the high performance implementation of .NET for developing web applications and services that run on Windows, Mac, and Linux. It is open source, and it shares code with the .NET Framework and Xamarin apps.

On installing .NET Core you will notice that there are several pieces that fit together, some of which are optional. In a development environment, it's preferable to have the SDK: a development toolkit that contains the CLI (the command line tool you use to create and run Core projects) along with other things, such as the language compilers.


Please note that when .NET Core is mentioned, it refers to one of the three main .NET runtimes; the other two are the .NET Framework and Mono (for Xamarin).

The .NET Core runtime consists of a number of pieces: the framework libraries, the Core CLR, the SDK, and the app host (discussed later).

To many people, the most confusing part is the different releases. The SDK version does not match the runtime version, and you can install different releases of the SDK and the runtime side by side.

When you download the SDK, the .NET Core runtime of some specific version is usually installed with it. If you need additional releases, you can download them separately.

"Usually installed with the SDK" means it depends. Whether you download the SDK from the Microsoft downloads site or from GitHub, there will be some information somewhere letting you know if and which runtimes come with it. For older downloads you need to look at the release notes if it is unclear. You should see something like this:

"The .NET Core SDK 1.0.4 includes .NET Core 1.0.5 and 1.1.2 runtimes, so downloading the runtime packages separately is not needed when installing the SDK."

The command line tool used when working with Core projects is added to the path when you install the SDK, and can be run using the 'dotnet' alias for the executable. Running dotnet --version will give you the SDK version (!), while running dotnet by itself prints the host version as well. To get the runtime versions you have downloaded and installed, you must look somewhere else.
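For instance, on a default Windows install you might check the pieces like this (paths assumed; the output depends on what is installed):

```powershell
# SDK version, as reported by the CLI
dotnet --version

# Installed runtimes live under the shared framework folder
Get-ChildItem "$env:ProgramFiles\dotnet\shared\Microsoft.NETCore.App" | Select-Object Name
```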

There is often some confusion about getting the version numbers, and after updating all your services and environments you have to verify that you have the right versions of the SDK and runtime on your servers and development machines.

Here is an example of a function added to a PowerShell Core module for this purpose:

Function Get-CoreInfo {
    if (Test-Path "$env:programfiles/dotnet/") {
        try {
            [Collections.Generic.List[string]] $info = dotnet
            $versionLineIndex = $info.FindIndex( {$args[0].ToString().ToLower() -like "*version*:*"} )
            $runtimes = (ls "$env:programfiles/dotnet/shared/Microsoft.NETCore.App").Name | Out-String
            $sdkVersion = dotnet --version
            $fhVersion = (($info[$versionLineIndex]).Split(':')[1]).Trim()
            return "SDK version: `r`n$sdkVersion`r`n`r`nInstalled runtime versions:`r`n$runtimes`r`nFramework Host:`r`n$fhVersion"
        }
        catch {
            $errorMessage = $_.Exception.Message
            Write-Host "Something went wrong`r`nError: $errorMessage"
        }
    }
    else {
        Write-Host 'No SDK installed'
        return ""
    }
}

We conclude now. Keep coding!



Defining Custom Logging Messages Using LoggerMessage.Define In ASP.NET Core

One of the nicest features introduced in ASP.NET Core is the universal logging infrastructure. In this post we shall discuss one of the helper methods in the logging library, and how to use it to log messages efficiently in your libraries.


Logging Overview

The logging facility is exposed as the ILogger<T> and ILoggerFactory interfaces, which you can inject into your services using dependency injection to log messages in various ways. For instance, in the following ProductController, a message is logged when the View action is invoked.

public class ProductController : Controller
{
    private readonly ILogger _logger;

    public ProductController(ILoggerFactory loggerFactory)
    {
        _logger = loggerFactory.CreateLogger<ProductController>();
    }

    public IActionResult View(int id)
    {
        _logger.LogDebug("View Product called with id {Id}", id);
        return View();
    }
}

The ILogger can log messages at several levels, given by the LogLevel enum:

public enum LogLevel
{
    Trace = 0,
    Debug = 1,
    Information = 2,
    Warning = 3,
    Error = 4,
    Critical = 5,
}

The final piece of the logging infrastructure is logging providers. These are the "sinks" the logs are written to. You can plug in several providers and write logs to a variety of different locations, for instance the console, a file, or Serilog.
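For instance, a console provider could be wired up in Startup like this (a sketch for ASP.NET Core 1.x; the Debug minimum level is an arbitrary choice):

```csharp
public void Configure(IApplicationBuilder app, ILoggerFactory loggerFactory)
{
    // Send Debug-and-above messages to the console sink.
    loggerFactory.AddConsole(LogLevel.Debug);

    app.UseMvc();
}
```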

One good thing about the logging infrastructure and the universal use of DI in the ASP.NET Core libraries is that the same interfaces and classes are used throughout the libraries as well as in your application.

How to control logs produced by different categories

When creating a logger with CreateLogger<T>, the type name you provide is used as a category for the logs. At the application level, you can choose which LogLevels are output for a given category.

For instance, you can specify that by default, Debug or higher level logs are written to the providers, but for logs written by services in the Microsoft namespace, only logs of Warning level or higher are written.

With this approach you can control the amount of logging produced by the various libraries in your application, raising logging levels only for the areas that need them.

Logs without filtering

If you look, you'll see that most of the logs come from internal components, from classes in the Microsoft namespace. It's nothing but noise. You can filter those down to Warning level in the Microsoft namespace, while retaining other logs at Debug level:

Logs with filtering

With the default ASP.NET Core 1.x template, you need to change the appsettings.json file and set the log levels to Warning as appropriate:

{
    "Logging": {
        "IncludeScopes": false,
        "LogLevel": {
            "Default": "Debug",
            "System": "Warning",
            "Microsoft": "Warning"
        }
    }
}

Note: in ASP.NET Core 1.x, filtering is something of an afterthought. Some logging providers, like the console provider, let you specify how to filter. Alternatively, you can apply filters to all providers at the same time using the WithFilter method.

Developing logging delegates with the LoggerMessage Helper

The LoggerMessage class lives in the Microsoft.Extensions.Logging.Abstractions package and has a number of static, generic Define methods that return an Action<>, which in turn can be used to create strongly-typed logging extensions.

The strongly-typed logging extension methods

In this example, we are going to log the time at which the HomeController.Index action method executes:

public class HomeController : Controller
{
    public IActionResult Index()
    {
        _logger.HomeControllerIndexExecuting(DateTimeOffset.Now);
        return View();
    }
}

The HomeControllerIndexExecuting method is a custom extension method that takes a DateTimeOffset parameter. We can define it as:

internal static class LoggerExtensions
{
    private static Action<ILogger, DateTimeOffset, Exception> _homeControllerIndexExecuting;

    static LoggerExtensions()
    {
        _homeControllerIndexExecuting = LoggerMessage.Define<DateTimeOffset>(
            logLevel: LogLevel.Debug,
            eventId: 1,
            formatString: "Executing 'Index' action at '{StartTime}'");
    }

    public static void HomeControllerIndexExecuting(
        this ILogger logger, DateTimeOffset executeTime)
    {
        _homeControllerIndexExecuting(logger, executeTime, null);
    }
}

The HomeControllerIndexExecuting method is an ILogger extension method that invokes a static Action field on our static LoggerExtensions class. The _homeControllerIndexExecuting field is initialized using the ASP.NET Core LoggerMessage.Define method, providing a logLevel, an eventId, and the formatString used to create the log.

That seems like a lot of effort; you could just call _logger.LogDebug() directly in the HomeControllerIndexExecuting extension method.

The goal is to improve performance when logging messages for categories that are filtered out, without having to explicitly write if (_logger.IsEnabled(LogLevel.Debug)) everywhere. The answer lies in the LoggerMessage.Define<T> method.

The LoggerMessage.Define method

The purpose of this method is threefold:

Encapsulate the if statement to allow performant logging

Enforce that the correct strongly-typed parameters are passed when the message is logged

Ensure the log message contains the correct number of placeholders for the parameters

Let's see how the method looks:

public static class LoggerMessage
{
    public static Action<ILogger, T1, Exception> Define<T1>(
        LogLevel logLevel, EventId eventId, string formatString)
    {
        var formatter = CreateLogValuesFormatter(
            formatString, expectedNamedParameterCount: 1);

        return (logger, arg1, exception) =>
        {
            if (logger.IsEnabled(logLevel))
            {
                logger.Log(logLevel, eventId, new LogValues<T1>(formatter, arg1), exception, LogValues<T1>.Callback);
            }
        };
    }
}

First, it checks that the given format string ("Executing 'Index' action at '{StartTime}'") has the correct number of named parameters. Then it returns an action with the required number of generic parameters. There are several overloads of the Define method, taking 0 to 6 generic parameters depending on the number you need for your custom log message.

We conclude now. Keep coding! 



Exploring CQRS within the Brighter .NET open source project

This article discusses the project called "Brighter." It has been around in the .NET space for many years and is in the process of moving to .NET Core for greater portability and performance.

Brighter is a .NET Command Dispatcher, with Command Processor features for quality of service and support for Task Queues.


The Brighter project includes many libraries and examples that you can use to support CQRS architectural styles in .NET. CQRS is the acronym for Command Query Responsibility Segregation. In the words of Martin Fowler: "At its heart is the notion that you can use a different model to update information than the model you use to read information." The Query Model reads and the Command Model updates or validates.

Brighter supports "Distributed Task Queues," with which you can improve performance when you're queuing work or integrating with microservices.

When creating distributed systems, Hello World is not the typical use case. But it is a good example here in that it sets aside any business logic and shows you the fundamental structure and concepts.

Imagine there's a command you wish to send, say a GreetingCommand. A command can be either a write or a "do this" sort of command.

internal class GreetingCommand : Command
{
    public GreetingCommand(string name)
        : base(Guid.NewGuid())
    {
        Name = name;
    }

    public string Name { get; private set; }
}

Then something else will "handle" these commands. Note that nowhere do we call Handle() ourselves; much like dependency injection, the framework calls it for us.

internal class GreetingCommandHandler : RequestHandler<GreetingCommand>
{
    [RequestLogging(step: 1, timing: HandlerTiming.Before)]
    public override GreetingCommand Handle(GreetingCommand command)
    {
        Console.WriteLine("Hello {0}", command.Name);
        return base.Handle(command);
    }
}

Next, register a factory that maps types to handlers and returns them. In a real system you could use IoC dependency injection for the mapping as well.
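The SimpleHandlerFactory used below is not shown in the original snippet; a minimal sketch, assuming Brighter's IAmAHandlerFactory interface, might look like this:

```csharp
internal class SimpleHandlerFactory : IAmAHandlerFactory
{
    public IHandleRequests Create(Type handlerType)
    {
        // Only one handler type exists in this toy example;
        // a real system would resolve it from an IoC container.
        return new GreetingCommandHandler();
    }

    public void Release(IHandleRequests handler)
    {
        // Nothing to clean up in this sketch.
    }
}
```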

The Main() method builds a registry that is passed into a larger pipeline where you can set policy for processing commands. This pattern may feel familiar, with its "Builders" and "Handlers."

private static void Main(string[] args)
{
    var registry = new SubscriberRegistry();
    registry.Register<GreetingCommand, GreetingCommandHandler>();

    var builder = CommandProcessorBuilder.With()
        .Handlers(new HandlerConfiguration(
            subscriberRegistry: registry,
            handlerFactory: new SimpleHandlerFactory()
        ))
        .DefaultPolicy()
        .NoTaskQueues()
        .RequestContextFactory(new InMemoryRequestContextFactory());

    var commandProcessor = builder.Build();
}

Once you have the commandProcessor, you can send commands to it quite easily and the work is done. How you create the commands is up to you.

commandProcessor.Send(new GreetingCommand("HanselCQRS"));

Methods within RequestHandlers can also have other behaviours associated with them, as in the case of [RequestLogging] on the Handle() method shown above.

You could add other things like validation, retries, or circuit breakers. The point is that Brighter offers a bunch of handlers that can operate on a command.

One of the best things about Brighter is that it is neither prescriptive nor unwieldy.

In short, Brighter is meant to be a library, not a framework, so it is lightweight and divided into several packages that allow you to pull in only the facilities you need in your project.

We conclude now. Keep coding!



Options For The Configuration Of ASP.NET Core Application Settings

In most discussions about project configuration in .NET Core, one thing that has been consistent is the use of JSON-based application settings for configuration data. While JSON reads quite easily, it can quickly end up jumbled if you are storing large, complex objects consisting of several nested objects and arrays, each with properties of various types, and so on. It's great, then, that you have options.


Enter the Options Framework

The Options framework is a very basic framework, mainly designed to handle how POCO settings are accessed and configured within .NET Core, and it makes them easy to work with.

Suppose you have a configuration file that looks like the following:

{
    "ApplicationLayout": {
        "LayoutChangingEnabled": true,
        "Layouts": [
            {
                "Name": "Standard",
                "Modules": [
                    { "Name": "Foo", "Order": 1 },
                    { "Name": "Bar", "Order": 2 }
                ]
            },
            {
                "Name": "Detailed",
                "Modules": [
                    { "Name": "Foo", "Order": 1 },
                    { "Name": "Bar", "Order": 2 },
                    { "Name": "Admin", "Order": 3 }
                ]
            }
        ]
    }
}

If you want to reach any individual item, drilling down into the nested elements gets pretty hairy and clutters your code. This is where the Options framework comes in.

Options lets you define a C# class to represent your configuration settings, and it binds your existing JSON configuration to that class with only a single line of code. Here is how that structure looks for the example above:

public class ApplicationConfiguration
{
    public ApplicationLayout Layout { get; set; }
}

public class ApplicationLayout
{
    public bool LayoutChangingEnabled { get; set; }
    public Layout[] Layouts { get; set; }
}

public class Layout
{
    public string Name { get; set; }
    public Module[] Modules { get; set; }
}

public class Module
{
    public string Name { get; set; }
    public int Order { get; set; }
}
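To make the mapping concrete, here is a self-contained sketch that deserializes the same JSON shape into these classes. It uses System.Text.Json directly rather than the Options framework (which does the binding for you); the JsonPropertyName attribute is an assumption added for this sketch, since the root property is named Layout while the JSON key is ApplicationLayout.

```csharp
using System;
using System.Text.Json;
using System.Text.Json.Serialization;

// Sketch only: plain deserialization to illustrate the JSON-to-POCO shape.
// The attribute bridges the JSON key "ApplicationLayout" to the property
// name "Layout" used in the article's class.
public class ApplicationConfiguration
{
    [JsonPropertyName("ApplicationLayout")]
    public ApplicationLayout Layout { get; set; }
}

public class ApplicationLayout
{
    public bool LayoutChangingEnabled { get; set; }
    public Layout[] Layouts { get; set; }
}

public class Layout
{
    public string Name { get; set; }
    public Module[] Modules { get; set; }
}

public class Module
{
    public string Name { get; set; }
    public int Order { get; set; }
}

public static class ConfigDemo
{
    // Deserialize a configuration document into the strongly typed classes.
    public static ApplicationConfiguration Load(string json) =>
        JsonSerializer.Deserialize<ApplicationConfiguration>(json);
}
```

Once deserialized, nested values are reached through ordinary property access instead of string-keyed lookups.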

Next, add the Options NuGet package to the application:

Install-Package Microsoft.Extensions.Options

Next in the Startup.cs file, you can read the configuration file:

var config = new ConfigurationBuilder()
    .AddJsonFile("YourConfigurationFile.json")
    .Build();

Next, register the IOptions services so that your strongly typed configuration can be injected anywhere within the application:

public void ConfigureServices(IServiceCollection services)
{
    // Omitted for brevity

    services.AddOptions();
    services.Configure<ApplicationConfiguration>(config);
}

With that done, you can inject your options at various locations in the application, which lets you access exactly the data you need from your configuration without digging through several layers:

public class FooController
{
    private readonly ApplicationConfiguration _options;

    public FooController(IOptions<ApplicationConfiguration> options)
    {
        // Access your options here
        _options = options.Value;
        var canChangeLayout = _options.Layout.LayoutChangingEnabled; // "true"
    }
}

Depending on the situation, you might want to override one of the values in your settings. You can do this by applying a delegate after the initial configuration, as shown below:

public void ConfigureServices(IServiceCollection services)
{
    // Omitted for brevity

    services.Configure<ApplicationConfiguration>(config);

    // Update the configuration
    services.Configure<ApplicationConfiguration>(options =>
    {
        options.Layout.LayoutChangingEnabled = false;
    });
}
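Conceptually, each Configure call registers a delegate, and the delegates run in registration order when the options instance is built, so a later registration can override an earlier one. Here is a rough, self-contained simulation of that ordering (an illustration of the idea, not the real Options implementation):

```csharp
using System;
using System.Collections.Generic;

// Hypothetical stand-in for the bound settings class.
public class AppSettings
{
    public bool LayoutChangingEnabled { get; set; }
}

public static class OptionsSimulator
{
    // Apply every registered configure delegate in order, mimicking how
    // successive Configure<T> registrations each get a chance to mutate
    // the options instance; the last writer wins.
    public static AppSettings Build(IEnumerable<Action<AppSettings>> configureCalls)
    {
        var options = new AppSettings();
        foreach (var configure in configureCalls)
            configure(options);
        return options;
    }
}
```

This is why the second Configure call in the snippet above can flip LayoutChangingEnabled after the JSON binding has run.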

There are several more advanced scenarios you might find useful, depending on the complexity of your application.

See Them In Action

If you are eager to see how these features look in action, it is highly recommended to try them against some of the more advanced use cases in your own applications.

We conclude now. Keep coding!



An Introduction To Docker For .NET Developers

Two words you often hear among .NET programmers are “Microservices” and “Docker”. Both are interesting topics that excite developers and architects alike.

Historically, front-end developers had to run a Windows VM to work with .NET projects. This process carried overhead and needed streamlining.


One solution is to provide Docker images of the back-end services so that front-end developers can quickly spin up their development environment. They can do this without running a Windows VM, and the result is a great productivity gain.

Containers and Docker both work very well in day-to-day development.

Docker

There is plenty of information available on Docker, but let’s distil the term. Docker is a containerisation technology and application platform that lets you package and deploy an application or service as a separate unit together with all of its dependencies. Loosely speaking, a container can be thought of as a very lightweight, self-contained virtual machine (VM).

Unlike VMs, containers run on top of a shared OS kernel. They are lightweight, and that is their advantage over traditional VMs: you can often make better use of the host machine by running more containers and sharing the underlying resources. With a light footprint carrying only the minimum dependencies they need, containers share host resources efficiently.

A Docker image can be as small as a few hundred megabytes, and a container can start in a fraction of a second. This makes containers good for scaling, since extra instances can be started very quickly in response to scaling triggers such as a traffic increase or a growing queue. With conventional VM scaling you might wait several minutes before the extra capacity comes online, by which time the load peak may already have caused problems.

Key Concepts

Here is a summary of the core components and terms you need to know.

Image

A Docker image can be considered the unit of deployment. Images are defined by Dockerfiles and, once built, are immutable. To customise an image further, you can use it as the base image in your next Dockerfile. Typically, you push built images to a container registry, which makes them available for others to reference and run.

Container

A container is a running instance of a Docker image. When you start an image with the docker run command, your host has an instance of that image running on it.

Dockerfile

A Dockerfile describes how an image is built and how an application gets deployed into it. It is a plain text file, and you might need only a few lines to get started with your image. Docker images are built in layers, and Microsoft offers several base images for working with .NET Core applications.
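As a hedged illustration, a Dockerfile for a .NET Core application might look like the following; the image tags and the project name (MyApi) are assumptions for this sketch, not prescriptions:

```dockerfile
# Hypothetical multi-stage Dockerfile for a .NET Core web app.
# Build stage: restore and publish using an SDK base image.
FROM microsoft/dotnet:2.0-sdk AS build
WORKDIR /src
COPY . .
RUN dotnet restore
RUN dotnet publish -c Release -o /app

# Runtime stage: copy the published output into a smaller runtime image.
FROM microsoft/aspnetcore:2.0
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "MyApi.dll"]
```

The two-stage layout keeps the SDK out of the final image, so the image you ship carries only the runtime and your published output.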

Docker Compose

A Docker Compose file is a simple way to orchestrate multiple images. It uses the YAML format to specify one or more containers that make up a single system, or part of one. In this file you specify the images to start, what they depend on, which host ports they must expose, and so on. With a single command you can build all the images, and with another single command you can tell Docker to run all the containers.
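A hypothetical docker-compose.yml along those lines might look like this; the service names, images, and ports are illustrative assumptions:

```yaml
# Hypothetical compose file: an API container plus the database it depends on.
version: "3"
services:
  api:
    build: .            # built from the Dockerfile in this directory
    ports:
      - "5000:80"       # host port 5000 -> container port 80
    depends_on:
      - db              # start the database before the API
  db:
    image: postgres:10
    environment:
      POSTGRES_PASSWORD: example
```

With this in place, `docker-compose build` builds the images and `docker-compose up` starts both containers.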

Host

The host is the underlying OS on which you run Docker. Docker uses shared kernel resources to run your images. Until recently the host had to be a Linux machine, but with Microsoft releasing Windows containers it is now possible to use a Windows machine as the host for Windows-based images.

We conclude now.

Keep coding!



How To Develop Windows Service In Visual Studio 2005

This is an illustration of a Windows service that sends a good-morning message by email each morning. The service's timer fires every 60 minutes; if the current hour equals 9, it sends the mail to the recipients.

First, create a project:

File –> New –> Project

Then choose Windows under Project Types. In the Templates panel on the right, choose Windows Service.

[Screenshot: Develop Windows Service in Visual Studio 2005]

Give the Windows service a name.

The project is created with a service named Service1.cs [Design]. Rename it appropriately.

In the service designer, click the link that reads “click here to switch to code view”. This opens the code window for Service1.cs.

Here, write the following code:

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Diagnostics;
using System.ServiceProcess;
using System.Text;
using System.Timers;
using System.Net.NetworkInformation;
using System.Net.Security;
using System.Net.Mail;

namespace CHGoodMorningService
{
    public partial class CHGoodMorningService : ServiceBase
    {
        public CHGoodMorningService()
        {
            InitializeComponent();
        }

        Timer CHTime = new Timer();

        protected override void OnStart(string[] args)
        {
            CHTime.Elapsed += new ElapsedEventHandler(OnElapsedTime);
            CHTime.Interval = 3600000; // one hour: the timer fires on its own every hour
            CHTime.Enabled = true;
        }

        private void OnElapsedTime(object source, ElapsedEventArgs e)
        {
            MailMessage objMail = new MailMessage();
            objMail.From = new MailAddress("Sender-MailID", "Sender Name");
            objMail.To.Add("Recipient-MailID");
            objMail.Subject = "The Person says Good Morning to You.";
            objMail.IsBodyHtml = true;
            objMail.Body = "Hi, Good Morning";

            if (DateTime.Now.Hour == 9)
            {
                Send(objMail);
            }
        }

        public static void Send(MailMessage msg)
        {
            SmtpClient SmtpMail = new SmtpClient();
            SmtpMail.Host = "ServerName";
            SmtpMail.Credentials = new System.Net.NetworkCredential("mail", "password");
            SmtpMail.Send(msg);
        }

        protected override void OnStop()
        {
            CHTime.Enabled = false;
        }
    }
}


Developing the Windows service is finished, but you still have to create a setup project to install it.

Use a Setup project to install the Windows Service

After finishing the steps in the “Develop a Windows Service project” section, you can add a deployment project that packages the service application so that it can be installed. To achieve this, follow the steps below:

1. Add a setup project to your CHGoodMorningService solution.

a. In Solution Explorer, right-click Solution ‘CHGoodMorningService’, point to Add, and then click New Project.

b. Under Project Types, click Setup and Deployment Projects (or Setup and Deployment).

c. Under Templates, click Setup Project.

d. In the Name box, type CHGoodMorningServiceSetup.

e. In the Location box, type C:\, and then click OK.

2. Tell the deployment project what to package.

In Solution Explorer, right-click CHGoodMorningServiceSetup, point to Add, and then click Project Output.

a. In the Add Project Output Group dialog box, click CHGoodMorningService.

b. Click Primary Output, and then click OK.

For a correct installation, you need to include only the primary output. To set up the custom actions, follow these steps:

  • In Solution Explorer, right-click CHGoodMorningServiceSetup, point to View, and then click Custom Actions. Right-click Custom Actions, and then click Add Custom Action.
  • Click the Application Folder, and then click OK.
  • Click Primary output from CHGoodMorningService, and then click OK.

Note that Setup projects are not included in the build configuration by default. To build the solution, use either of these methods:

  • Right-click CHGoodMorningService, and then click Build. Then right-click CHGoodMorningServiceSetup, and click Build.
  • To build the whole solution at once, click Configuration Manager on the Build menu, select the Build check box for CHGoodMorningServiceSetup, and then press CTRL+SHIFT+B.

When the solution is built, you have a complete Setup package for the service.

To install the service, right-click CHGoodMorningServiceSetup, and then click Install.

Next, in the CHGoodMorningServiceSetup dialog box, click Next three times. A progress bar appears as the Setup program installs the service.

Once the installation of the service is complete, click Close.

We conclude now. Keep coding!



Error Handling In ASP.NET Core Web API

Why is there a need for an alternate approach?

In .NET Core, MVC and Web API have been merged, so the same controller types serve both MVC and API actions. Despite the similarities, when it comes to handling errors you will most likely want a different approach for API errors.

MVC actions are executed as a result of a user action in the browser, so returning an error page to the browser is the right approach. With an API, this is generally not the case.

API calls are typically made from back-end code or JavaScript, and in either case you never simply display the raw response from the API. Instead, you check the status code and parse the response to decide whether the action succeeded, showing information to the user as needed. An error page is not useful in these circumstances: it bloats the response with HTML and makes client code harder to write, because JSON (or XML) is expected, not HTML.


Since we have to return data in a different format for Web API actions, the techniques for handling errors also differ from MVC.

The minimalistic approach

With MVC actions, failing to show a friendly error page is unacceptable in a polished application. With an API, while not ideal, empty response bodies are far more permissible for many invalid request types. Simply returning a 404 status code for an API route that does not exist may give the client enough information to fix their code.

With no configuration, this is what ASP.NET Core gives you.

Depending on your requirements, this may be acceptable for some status codes, but it will rarely be sufficient for validation failures. If a client passes you invalid data, returning a bare 400 Bad Request is not useful enough for the client to find the problem. At the very least, you need to tell them which fields are incorrect, and ideally return an informative message for each failure.

With ASP.NET Web API, this is trivial. Assuming you are using model binding, you get validation for free by using data annotations and/or IValidatableObject. Returning the validation details to the client as JSON is one simple line of code.

Here is an example:

public class GetProductRequest : IValidatableObject
{
    [Required]
    public string ProductId { get; set; }

    public IEnumerable<ValidationResult> Validate(ValidationContext validationContext)
    {
        if (…)
        {
            yield return new ValidationResult("ProductId is invalid", new[] { "ProductId" });
        }
    }
}

And our controller action:

[HttpGet("product")]
public IActionResult GetProduct(GetProductRequest request)
{
    if (!ModelState.IsValid)
    {
        return BadRequest(ModelState);
    }
}

A missing ProductId leads to a 400 status code plus a JSON response body like the following:

{
    "ProductId": ["The ProductId field is required."]
}

This provides the minimum a client needs to use your service, and it is not hard to build on this pattern to create a better client experience.

Returning extra information for particular errors

It is easy to provide extra information beyond the bare status code, and doing so is highly recommended. There are many circumstances where a status code by itself cannot explain the cause of a failure. Take the 404 status code, for instance; it can mean:

We are making the request to the wrong site (perhaps the “www” site rather than the “api” subdomain).

The domain is right but the URL does not match any route.

The URL matches a valid route, but the resource doesn’t exist.

Here is a first attempt at handling this:

[HttpGet("product")]
public async Task<IActionResult> GetProduct(GetProductRequest request)
{
    var model = await _db.Get(…);

    if (model == null)
    {
        return NotFound("Product not found");
    }

    return Ok(model);
}

We are now returning a more useful message, but it is a long way from ideal. The main concern is that by passing a string to the NotFound method, the framework returns this string as a plain-text response rather than JSON.

From a client's perspective, an API that returns errors in varying formats is harder to consume than one that reliably returns JSON.

This issue can be fixed quickly by changing the code to what appears below, but in the next section we will discuss a better option.

return NotFound(new { message = "Product not found" });

Customising the response structure for consistency

Creating anonymous objects on the fly is not the way to go if you want a consistent client experience. Ideally, the API should return the same response structure in all cases, even when a request is unsuccessful.

Let’s define a base ApiResponse class:

public class ApiResponse
{
    public int StatusCode { get; }

    [JsonProperty(NullValueHandling = NullValueHandling.Ignore)]
    public string Message { get; }

    public ApiResponse(int statusCode, string message = null)
    {
        StatusCode = statusCode;
        Message = message ?? GetDefaultMessageForStatusCode(statusCode);
    }

    private static string GetDefaultMessageForStatusCode(int statusCode)
    {
        switch (statusCode)
        {
            case 404:
                return "Resource not found";
            case 500:
                return "An unhandled error occurred";
            default:
                return null;
        }
    }
}
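The default-message fallback can be exercised on its own. This sketch trims the class above to plain C# (dropping the Newtonsoft JsonProperty attribute, which only affects serialisation) so its behaviour is easy to verify:

```csharp
using System;

// Trimmed copy of the article's ApiResponse, demonstrating the
// default-message fallback for well-known status codes.
public class ApiResponse
{
    public int StatusCode { get; }
    public string Message { get; }

    public ApiResponse(int statusCode, string message = null)
    {
        StatusCode = statusCode;
        // An explicit message wins; otherwise fall back to a default
        // derived from the status code.
        Message = message ?? GetDefaultMessageForStatusCode(statusCode);
    }

    private static string GetDefaultMessageForStatusCode(int statusCode)
    {
        switch (statusCode)
        {
            case 404: return "Resource not found";
            case 500: return "An unhandled error occurred";
            default: return null;
        }
    }
}
```

Note that a 200 response carries no message at all, which is why the serialisation attribute in the full class suppresses null values.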

We also need a derived ApiOkResponse class for returning data:

public class ApiOkResponse : ApiResponse
{
    public object Result { get; }

    public ApiOkResponse(object result)
        : base(200)
    {
        Result = result;
    }
}

Finally, let’s declare an ApiBadRequestResponse class to handle validation errors:

public class ApiBadRequestResponse : ApiResponse
{
    public IEnumerable<string> Errors { get; }

    public ApiBadRequestResponse(ModelStateDictionary modelState)
        : base(400)
    {
        if (modelState.IsValid)
        {
            throw new ArgumentException("ModelState must be invalid", nameof(modelState));
        }

        Errors = modelState.SelectMany(x => x.Value.Errors)
            .Select(x => x.ErrorMessage).ToArray();
    }
}

These classes are quite simple but can be customised to suit your requirements.

If the action is changed to use these ApiResponse-based classes, it becomes:

[HttpGet("product")]
public async Task<IActionResult> GetProduct(GetProductRequest request)
{
    if (!ModelState.IsValid)
    {
        return BadRequest(new ApiBadRequestResponse(ModelState));
    }

    var model = await _db.Get(…);

    if (model == null)
    {
        return NotFound(new ApiResponse(404, $"Product not found with id {request.ProductId}"));
    }

    return Ok(new ApiOkResponse(model));
}

The code is a little more involved, but the responses are now consistent.

Centralising Validation Logic

Given that validation is needed almost everywhere, it makes sense to refactor this code into an action filter. This shrinks our actions, removes duplicated code, and improves consistency.

public class ApiValidationFilterAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext context)
    {
        if (!context.ModelState.IsValid)
        {
            context.Result = new BadRequestObjectResult(new ApiBadRequestResponse(context.ModelState));
        }

        base.OnActionExecuting(context);
    }
}

Handling all errors

Responding to bad input inside controller actions is the best way to give specific error information to the client. Often, though, we need to respond to more generic problems. Cases include:

A 401 Unauthorized code comes back from security middleware.

A request for a URL that does not map to a controller action, resulting in a 404.

Global exceptions. Unless you need to handle a specific exception, you should not clutter your actions with try/catch blocks.

As with MVC, the easiest way to deal with global errors is by using StatusCodePagesWithReExecute and UseExceptionHandler.

We have discussed StatusCodePagesWithReExecute before, but as a quick recap: when a non-success status code is returned from inner middleware, this middleware lets you execute another action to handle the status code and return a custom response.

UseExceptionHandler works similarly, catching and logging unhandled exceptions and letting you execute another action to handle the error. Here, we configure both pieces of middleware to point to the same action.

We add the middleware in Startup.cs:

app.UseStatusCodePagesWithReExecute("/error/{0}");
app.UseExceptionHandler("/error/500");

// register other middleware that might return a non-success status code

Then we add our error handling action:

[HttpGet("error/{code}")]
public IActionResult Error(int code)
{
    return new ObjectResult(new ApiResponse(code));
}

With this set up, all exceptions and non-success status codes (without a response body) will be handled by our error action, where we return our standard ApiResponse.

Custom Middleware

For complete control, you can replace the built-in middleware with custom middleware. This handles any response and serialises our ApiResponse objects as JSON. Used together with action code that returns ApiResponse objects, it ensures that both success and failure responses share the same structure and that every request results in both a status code and a predictable JSON body:

public class ErrorWrappingMiddleware
{
    private readonly RequestDelegate _next;
    private readonly ILogger<ErrorWrappingMiddleware> _logger;

    public ErrorWrappingMiddleware(RequestDelegate next, ILogger<ErrorWrappingMiddleware> logger)
    {
        _next = next;
        _logger = logger ?? throw new ArgumentNullException(nameof(logger));
    }

    public async Task Invoke(HttpContext context)
    {
        try
        {
            await _next.Invoke(context);
        }
        catch (Exception ex)
        {
            _logger.LogError(EventIds.GlobalException, ex, ex.Message);
            context.Response.StatusCode = 500;
        }

        if (!context.Response.HasStarted)
        {
            context.Response.ContentType = "application/json";

            var response = new ApiResponse(context.Response.StatusCode);

            var json = JsonConvert.SerializeObject(response, new JsonSerializerSettings
            {
                ContractResolver = new CamelCasePropertyNamesContractResolver()
            });

            await context.Response.WriteAsync(json);
        }
    }
}

Summary:

Handling errors in ASP.NET Core Web APIs is similar to, but distinct from, MVC error handling. At the action level, we want to return custom objects rather than custom views.

For generic errors, you can still use the StatusCodePagesWithReExecute middleware, but you need to change the handling action to return an ObjectResult instead of a ViewResult.

We conclude now. Keep coding!

