.NET Application Architecture Guidance

The .NET team has produced several guides for developing web applications, microservices and container-based applications, and Xamarin.Forms mobile apps. You can take advantage of this guidance to build your applications according to accepted industry patterns with .NET and C#. You will find all of it on the .NET Application Architecture center.


The guidance comes in the form of eBooks and end-to-end sample reference applications.

There are two end-to-end reference applications that the guides use for their examples and that you can study as part of learning and applying the architectural guidance.

The reference applications will show you how to design microservices, web apps and Xamarin.Forms mobile apps.

This blog post will be useful if you want to explore one of these architectural areas but are not yet ready to plunge into the in-depth guidance.

There are four application architecture areas you can explore:

Microservices and Docker containers: Architecture, Patterns and Development Guidance

Web Applications with ASP.NET Core Architecture and Patterns Guidance

Production Ready Cloud applications with Azure Architecture Guidance

Mobile Apps with Xamarin.Forms Architecture and Patterns Guidance

What’s next for Architectural Guidance?

The guides and samples are only the first part of the guidance. You may have noticed that one important area is still missing from the architecture page: desktop apps. The VS Tools for UWP team and the Windows team are working on comparable guidance for desktop apps, and updates will follow.

In addition, work is advancing on other efforts related to “Production Ready Cloud Applications based on Azure”.

The Xamarin team is also evolving the Xamarin.Forms guidance as it releases new updates to the product.

As mentioned at the beginning of this post, check out the .NET Application Architecture center page, download the eBooks/guides and explore the reference applications from there.

We conclude with this. Keep visiting us to learn more about the ASP.NET world; it is vast and still expanding.

That’s it for now!

Keep coding!

If you want to enhance your skills in .NET through a Dot NET training program, our institute would be of great help and support. We offer a well-structured program for the best Dot Net training course. Among the many reputed institutes for dot net training and placement in Pune, CRB Tech has created a niche for itself.

Stay connected to CRB Tech to upgrade your technical skills and to stay updated with all the happenings in the world of Dot Net.


Top 5 Tools For .NET Development


Could you imagine an artist painting without brushes, or Michelangelo sculpting without chisels? Obviously not. The same holds for web developers, who cannot build enterprise-level web apps without the right tools.


In this article we discuss five helpful tools that .NET application development experts are using.

Microsoft Visual Studio

Visual Studio is the core tool in most developers’ everyday work. It provides nearly everything one might need to develop, test or debug web apps. The current Visual Studio 2013 release is widely used in .NET app development as well.

NuGet

NuGet is a free, open-source package management system for the .NET platform that aims to simplify incorporating third-party libraries into a .NET app during development.

Web Essentials

Its name describes it well: an essential Visual Studio plug-in that boosts productivity and helps you write HTML, CSS, JS and LESS faster. The tool’s best features are merged into the latest Visual Studio version.

ReSharper

It is unfortunately not a free tool, but it is the best code refactoring and productivity plug-in for Visual Studio. It adds a great productivity boost to your programming workflow and saves programmers time and effort by flagging errors in advance.

Other Tools

Version Control Tools

Version control gives you the ability to track changes and reverse them if necessary. It is not only important for maintaining a project history but is also the basis for team collaboration. You can use it across your projects irrespective of their size.

Browser Development or Debug Tools

Many .NET developers opt for Chrome with its Developer Tools, or Firefox with Firebug, which let them examine every aspect of their web page. Likewise, Internet Explorer and other browsers ship their own development and debugging tools.

That’s it for now! Keep coding!

If you want to enhance your skills in .NET through a Dot NET training program, our institute would be of great help and support. We offer a well-structured program for the best Dot Net training course. Among the many reputed institutes for dot net training and placement in Pune, CRB Tech has created a niche for itself.

Stay connected to CRB Tech to upgrade your technical skills and to stay updated with all the happenings in the world of Dot Net.



Key Things You Must Know About ASP.NET CORE

ASP.NET Core has many benefits: it is lightweight, it can work with Gulp, Bower and Yeoman, it can be developed in a number of text editors such as VS Code, Atom and Sublime Text, it runs on an optimized CLR, and so on.


Here we discuss a few main benefits :

Benefits of .NET Core -

Independent Platform

Platform independence helps developers overcome the problems they faced when deploying to a Linux or Mac machine: the application can be deployed on any OS – Windows, macOS or Linux. The web server on which the application is hosted is Kestrel. This setup lets developers save and refresh to see updates rather than recompile and debug every time. To let developers use any OS to build ASP.NET applications, Microsoft has launched VS Code, which offers many of the functions of Visual Studio but is extremely lightweight.
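As a rough illustration, here is a minimal, hedged Program.cs sketch for an ASP.NET Core 1.x project hosted on Kestrel (the Startup class name is assumed to exist in the project):

using System.IO;
using Microsoft.AspNetCore.Hosting;

public class Program
{
    public static void Main(string[] args)
    {
        // Build a host that listens with Kestrel and uses the app's Startup class.
        var host = new WebHostBuilder()
            .UseKestrel()
            .UseContentRoot(Directory.GetCurrentDirectory())
            .UseStartup<Startup>()
            .Build();

        host.Run();
    }
}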

Middleware and Dependency Injection

Dependency injection is significant when developing big applications: it keeps coupling between classes loose, provides separation of concerns and, most importantly, makes unit testing easy. ASP.NET Core has dependency injection built into the framework, and combined with several third-party tools, unit testing can be done well.
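As a hedged sketch of what this looks like in practice (the IGreetingService interface and its implementation are made up purely for illustration):

using Microsoft.AspNetCore.Mvc;

// A hypothetical service and its implementation.
public interface IGreetingService
{
    string Greet(string name);
}

public class GreetingService : IGreetingService
{
    public string Greet(string name) => $"Hello, {name}!";
}

// In Startup.ConfigureServices, register it with the built-in container:
//     services.AddTransient<IGreetingService, GreetingService>();

// The framework then supplies it through the constructor, which keeps the
// controller loosely coupled and easy to unit test with a fake implementation.
public class GreetingController : Controller
{
    private readonly IGreetingService greetings;

    public GreetingController(IGreetingService greetings)
    {
        this.greetings = greetings;
    }

    public IActionResult Index() => Content(greetings.Greet("World"));
}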

Middleware changes the way a request and response are served. Middleware components are stacked one above the other, and each component sees the request before forwarding it to the next one on the stack.
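A minimal sketch of that stacking, assuming a standard Startup.Configure method (the header name is just an illustration):

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;

public void Configure(IApplicationBuilder app)
{
    // Runs first: sees every request before the rest of the pipeline.
    app.Use(async (context, next) =>
    {
        context.Response.Headers["X-Pipeline"] = "seen";  // work on the way in

        await next();  // hand the request to the next middleware in the stack

        // code here runs on the way back out, after the inner components finish
    });

    // Runs last: MVC handles the request and produces the response.
    app.UseMvc();
}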

Open Source

Another big initiative by Microsoft is making ASP.NET Core open source. Because it is open source, the developer community keeps improving it and the documentation is constantly updated, so beginners don’t need to spend time searching for where to begin.

Simplified Structure and Single Framework

The structure is simplified: there is no App_Data and no App_Start. Global.asax is replaced by Startup.cs, which is the starting point of the application and where all the middleware components are registered.

In earlier versions of ASP.NET, having both MVC and Web API required two types of projects. In ASP.NET Core there is a single namespace under which both the MVC and Web API classes live, which causes less confusion for developers.
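For example, a single controller can serve both an HTML view and JSON data; the route, controller and action names below are purely illustrative:

using Microsoft.AspNetCore.Mvc;

public class ProductsController : Controller
{
    // MVC-style action that renders a Razor view.
    public IActionResult Index()
    {
        return View();
    }

    // Web API-style action on the same controller; the result is serialized to JSON.
    [HttpGet("api/products")]
    public IActionResult List()
    {
        return Json(new[] { "Keyboard", "Mouse", "Monitor" });
    }
}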

Limitations of .NET Core -

Documentation and Tools

The documentation gets updated frequently and any gaps are being filled. Yet, while developing an application and trying to find the root cause of an exception, things can be tricky because of the constant changes in both the documentation and the framework; a documented solution might not work properly after a framework change.

When to use .NET Core?

Selecting between ASP.NET Core and the full ASP.NET framework is a purpose-based decision.

ASP.NET Core is used when

You want to try the new features of ASP.NET Core.

You have developers working on Mac and Linux, or you intend to deploy to Mac and Linux machines.

You desire high-performance applications.

You wish to work with containers like Docker for the deployment of the applications.

ASP.NET is used when

You wish to rely on the support and documentation already available on the internet.

You want to create an enterprise application with all its security concerns covered, and it is a large application that uses existing tools and third-party packages that are not yet ported to ASP.NET Core.

Conclusion

The features packed into ASP.NET Core are quite useful, and being open source and platform independent, it attracts many developers. It is developing fast and is quite simple to use as well.

That’s it for now! Keep coding!!

If you want to enhance your skills in .NET through a Dot NET training program, our institute would be of great help and support. We offer a well-structured program for the best Dot Net training course. Among the many reputed institutes for dot net training and placement in Pune, CRB Tech has created a niche for itself.

Stay connected to CRB Tech to upgrade your technical skills and to stay updated with all the happenings in the world of Dot Net.


10 Things You Must Know About In-Memory Caching In ASP.NET Core

The main aim of a caching mechanism is to improve application performance. As an ASP.NET programmer, you probably know that ASP.NET Web Forms and ASP.NET MVC can use the Cache object to cache application data. This is known as server-side data caching and is available as a default feature of the framework. ASP.NET Core doesn’t have the Cache object, but you can implement in-memory caching instead. This article tells you how.


1. In-memory caching must be enabled in a Startup class

ASP.NET Core has no built-in Cache object that you can use directly inside controllers. Instead, in-memory caching works via dependency injection, and the first step is to register the in-memory caching service in the Startup class. Open the Startup class and locate the ConfigureServices() method. Edit it to look like this:

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();
    services.AddMemoryCache();
}

To add in-memory caching capabilities to your application, you call the AddMemoryCache() method on the service collection. A default implementation of an in-memory cache – an IMemoryCache object – can then be injected into the controllers.

2. Use of dependency injection to inject a cache object

Next open HomeController and edit it as shown below:

// requires: using Microsoft.Extensions.Caching.Memory;
public class HomeController : Controller
{
    private IMemoryCache cache;

    public HomeController(IMemoryCache cache)
    {
        this.cache = cache;
    }
}

The code above declares a private variable of type IMemoryCache. This variable gets assigned in the constructor: the constructor receives the cache parameter through DI, and the cache object is stored in the local variable for later use.

3. You can use a Set() method to store an item in the cache

Once you have an IMemoryCache object, you can read and write entries to it. Adding an entry to the cache is straightforward:

public IActionResult Index()
{
    cache.Set<string>("timestamp", DateTime.Now.ToString());
    return View();
}

The above code stores a cache entry in the Index() action.

4. You could use Get() method to get back an item from the cache

After adding an item to the cache, you might want to retrieve it elsewhere in the application. You can do so using the Get() method. Consider the following code:

public IActionResult Show()
{
    string timestamp = cache.Get<string>("timestamp");
    return View("Show", timestamp);
}

The above code gets back a cached item from another action of the HomeController. The Get() method specifies the item type and its key.

The Show view outputs the time stamp value as given below:

<h1>TimeStamp : @Model</h1>
<h2>@Html.ActionLink("Go back", "Index", "Home")</h2>

To test what you have coded so far, run the application.

5. You could use TryGetValue() to see whether a key is present in the cache

There are two ways to do that check inside Index() action. Both are mentioned below:

// first way
if (string.IsNullOrEmpty(cache.Get<string>("timestamp")))
{
    cache.Set<string>("timestamp", DateTime.Now.ToString());
}

// second way
if (!cache.TryGetValue<string>("timestamp", out string timestamp))
{
    cache.Set<string>("timestamp", DateTime.Now.ToString());
}

The first way uses the same Get() method as before, but this time inside an if block: if Get() can’t find the specified item in the cache, IsNullOrEmpty() returns true, and only then is Set() called to add the item.

The second way is better. It uses the TryGetValue() method to retrieve the item, and its return value tells you whether the key was found.

6. You could use GetOrCreate() to add an item if it doesn’t exist

Sometimes you want to retrieve an item if it exists, or create and add it if it doesn’t. GetOrCreate() handles both tasks – retrieve the entry if it exists OR create it if it doesn’t:

public IActionResult Show()
{
    string timestamp = cache.GetOrCreate<string>("timestamp", entry =>
    {
        return DateTime.Now.ToString();
    });

    return View("Show", timestamp);
}

The Show() action uses the GetOrCreate() method, which checks whether the timestamp key is present. If it is, the existing value is assigned to the local variable; otherwise a new entry is created and added.

To test this, run /Home/Show directly without going to /Home/Index first. You will still see a timestamp value, because GetOrCreate() adds it if it wasn’t already there.

7. You could set absolute and sliding expiration on a cached item

You can set an absolute expiration and a sliding expiration on a cached item. An absolute expiration means the cached item will be removed at a specific date and time, whereas a sliding expiration means it will be removed if it is not accessed for a given duration.

To set both of the expiration policies on a cached item you must use MemoryCacheEntryOptions object. The following code shows how MemoryCacheEntryOptions can be used.

MemoryCacheEntryOptions options = new MemoryCacheEntryOptions();
options.AbsoluteExpiration = DateTime.Now.AddMinutes(1);
options.SlidingExpiration = TimeSpan.FromMinutes(1);

cache.Set<string>("timestamp", DateTime.Now.ToString(), options);

The above code, from the modified Index(), creates a MemoryCacheEntryOptions object.

Once the AbsoluteExpiration and SlidingExpiration values are set, the Set() method is used to add the item to the cache along with these options.

8. You could wire a call-back when an item is removed from the cache

To know when an item is removed from the cache, you can wire up a callback function. The code below shows how to do it.

MemoryCacheEntryOptions options = new MemoryCacheEntryOptions();
options.AbsoluteExpiration = DateTime.Now.AddMinutes(1);
options.SlidingExpiration = TimeSpan.FromMinutes(1);
options.RegisterPostEvictionCallback(MyCallback, this);

cache.Set<string>("timestamp", DateTime.Now.ToString(), options);

The above code is quite similar to the previous example, but it also calls the RegisterPostEvictionCallback() method to wire up a callback function. Here the callback function is named MyCallback.

The MyCallback function looks like this:

private static void MyCallback(object key, object value, EvictionReason reason, object state)
{
    var message = $"Cache entry was removed : {reason}";
    ((HomeController)state).cache.Set("callbackMessage", message);
}

MyCallback() is a private static function inside the HomeController class. It has four parameters: the first two represent the key and value of the cached item that was just removed, the third indicates the reason why the item was removed, and the fourth is the state object passed when the callback was registered (here, the controller instance).

The callbackMessage could be accessed from the Show() action like this:

public IActionResult Show()
{
    string timestamp = cache.Get<string>("timestamp");
    ViewData["callbackMessage"] = cache.Get<string>("callbackMessage");
    return View("Show", timestamp);
}

And finally it can be displayed as below:

<h1>TimeStamp : @Model</h1>
<h3>@ViewData["callbackMessage"]</h3>
<h2>@Html.ActionLink("Go back", "Index", "Home")</h2>

To test it, run the application and go to /Home/Index. Then go to /Home/Show and keep refreshing the browser; once the entry is evicted you will see the callbackMessage.

9. You can set a priority for the cached item

To set a priority you must use MemoryCacheEntryOptions again.

MemoryCacheEntryOptions options = new MemoryCacheEntryOptions();
options.Priority = CacheItemPriority.Normal;

cache.Set<string>("timestamp", DateTime.Now.ToString(), options);

The Priority property of MemoryCacheEntryOptions lets you set a priority value for an item using the CacheItemPriority enumeration; the priority influences which entries are evicted first when the cache comes under memory pressure.

10. You can also set a dependency between several cached items

To see how it works, edit the Index() action as given below:

public IActionResult Index()
{
    var cts = new CancellationTokenSource();
    cache.Set("cts", cts);

    MemoryCacheEntryOptions options = new MemoryCacheEntryOptions();
    options.AddExpirationToken(new CancellationChangeToken(cts.Token));
    options.RegisterPostEvictionCallback(MyCallback, this);

    cache.Set<string>("timestamp", DateTime.Now.ToString(), options);

    cache.Set<string>("key1", "Hello World!", new CancellationChangeToken(cts.Token));
    cache.Set<string>("key2", "Hello Universe!", new CancellationChangeToken(cts.Token));

    return View();
}

The code starts by creating a CancellationTokenSource object, which is saved as an independent cached item under the key cts. Then a MemoryCacheEntryOptions object is created as before, but this time its AddExpirationToken() method is used to specify an expiration token.

As long as the token is active, the item remains in the cache; if the token is cancelled, the item is removed. Once the item is removed, MyCallback is invoked as before. Next the code creates two more items – key1 and key2. While adding these items, the third parameter of Set() passes a CancellationChangeToken based on the cts object created earlier.

This means there are three related entries – timestamp is the primary one, and key1 and key2 are dependent on the same token. When timestamp is removed, key1 and key2 must be removed too. To remove timestamp you have to cancel its token in code. Let’s do that with a separate action – Remove().

public IActionResult Remove()
{
    CancellationTokenSource cts = cache.Get<CancellationTokenSource>("cts");
    cts.Cancel();
    return RedirectToAction("Show");
}

Here you retrieve the CancellationTokenSource object saved earlier and call its Cancel() method. Doing so removes timestamp, key1 and key2. You can confirm this by retrieving all three keys in the Show() action.

That’s it for now! Keep coding!!

If you want to enhance your skills in .NET through a Dot NET training program, our institute would be of great help and support. We offer a well-structured program for the best Dot Net training course. Among the many reputed institutes for dot net training and placement in Pune, CRB Tech has created a niche for itself.

Stay connected to CRB Tech to upgrade your technical skills and to stay updated with all the happenings in the world of Dot Net.


The New Features Of .NET Core 1.1

.NET Core 1.1, which includes ASP.NET Core 1.1 and Entity Framework Core 1.1, was announced during the Connect(); event. It brings several interesting features and improvements.


Before you start using version 1.1, make sure you have installed the .NET Core 1.1 SDK; if you don’t, some things won’t work properly.

A few Highlights

ASP.NET Core

In version 1.1 you can invoke View Components as Tag Helpers.

This also has a URL rewriting middleware that can take the same configuration file as the IIS URL Rewrite Module.

Also new is the response caching middleware, which brings what was Output Cache in ASP.NET Web Forms to ASP.NET Core.

GZip compression also appears as a middleware component.

Middleware components can also be applied as global filters. Though this sounds interesting, the order cannot be specified.
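As a rough, hedged sketch of how these middleware pieces are wired up inside the Startup class (assuming the Microsoft.AspNetCore.Rewrite, Microsoft.AspNetCore.ResponseCaching and Microsoft.AspNetCore.ResponseCompression packages are referenced, and that an IIS-style UrlRewrite.xml file exists in the content root):

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Rewrite;
using Microsoft.Extensions.DependencyInjection;

// inside the Startup class
public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();
    services.AddResponseCaching();      // services for the caching middleware
    services.AddResponseCompression();  // services for GZip response compression
}

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    // Reuse an existing IIS URL Rewrite configuration file.
    app.UseRewriter(new RewriteOptions()
        .AddIISUrlRewrite(env.ContentRootFileProvider, "UrlRewrite.xml"));

    app.UseResponseCompression();
    app.UseResponseCaching();
    app.UseMvc();
}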

The next big inclusion is WebListener, an HTTP server tuned for Windows. It supports Windows authentication, port sharing, HTTPS with Server Name Indication (SNI), HTTP/2 over TLS (on Windows 10), direct file transmission, response caching, and WebSockets (on Windows 8 or above).

Temporary data (TempData) can be stored in a cookie, similar to pre-Core MVC.

Next, you can log to Azure App Service and read configuration information from Azure Key Vault. Also, when running on Azure, you can use Redis and Azure Storage for Data Protection key storage.

Worth mentioning, and also available earlier, is view pre-compilation: you can compile your views at build time and catch all the errors ahead of run time.

Mobile views, however, are not yet available.

Entity Framework Core

The Find method is back, letting you load entities by their primary keys.

Explicit loading of references and collections is also back.
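A hedged sketch of both features, using a made-up BloggingContext with Blog and Post entities (requires Microsoft.EntityFrameworkCore):

using (var context = new BloggingContext())
{
    // Find() loads an entity by its primary key value.
    var blog = context.Blogs.Find(1);

    // Explicit loading: populate a collection navigation on demand.
    context.Entry(blog).Collection(b => b.Posts).Load();

    // A reference navigation can be loaded explicitly the same way:
    // context.Entry(post).Reference(p => p.Blog).Load();
}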

Connection resiliency – in other words, the ability to automatically retry failed database operations – also made its way into version 1.1, as it existed pre-Core.
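A minimal sketch of enabling retries for SQL Server, assuming an EF Core DbContext configured in OnConfiguring (the connection string is a placeholder):

// inside your DbContext subclass; requires Microsoft.EntityFrameworkCore
protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
    optionsBuilder.UseSqlServer(
        "Server=.;Database=Blogging;Trusted_Connection=True;",   // placeholder connection string
        sqlOptions => sqlOptions.EnableRetryOnFailure());         // retry transient failures
}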

Now, totally new is the support for SQL Server’s Memory Optimized Tables.

You can now map to fields, not only properties! This was a commonly requested feature, and it helps in following a Domain Driven Design approach.
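For example, a property can be mapped to a private backing field in OnModelCreating; the entity and field names here are hypothetical:

// inside your DbContext subclass; requires Microsoft.EntityFrameworkCore
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    // Tell EF Core to read and write the _url field instead of the Url property setter.
    modelBuilder.Entity<Blog>()
        .Property(b => b.Url)
        .HasField("_url");
}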

Also new is the ability to replace a specific service implementation without replacing the whole service provider at startup.

There are more API changes, and apparently LINQ translation has improved significantly. Only time will tell.

.NET Core

First of all, .NET Core 1.1 can now be installed on more Linux distributions than before, as well as on macOS 10.12 and Windows Server 2016.

The dotnet CLI now has a new template for .NET Core projects.

.NET Core 1.1 supports .NET Standard 1.6.

A number of performance improvements, bug fixes and APIs ported from the full .NET Framework are also included.

To summarize, the biggest changes done:

  • Performance improvements, enough to make a positive first entry on the TechEmpower benchmarks.
  • Support for four more OS distributions.
  • Tens of new features and hundreds of bug fixes.
  • Updated documentation.

That’s it for now! Keep coding!!

If you want to enhance your skills in .NET through a Dot NET training program, our institute would be of great help and support. We offer a well-structured program for the best Dot Net training course. Among the many reputed institutes for dot net training and placement in Pune, CRB Tech has created a niche for itself.

Stay connected to CRB Tech to upgrade your technical skills and to stay updated with all the happenings in the world of Dot Net.


A Way To Profile-guided Optimization In .NET Core 2.0

With the introduction of .NET Core 2.0, many new optimizations were made that make your code faster. A lot of changes have been made in the base class library to improve performance, but in this post we shall talk about a specific category of optimization: profile-guided optimization, or PGO.


Profile-guided optimization

PGO is a compilation technique used by C++ compilers to produce better optimized code. It is a two-step process: a training run that records information about execution, and an optimized build step that feeds the training results back into the compiler to produce better code.

Since PGO is only applied to the native compiled components of the runtime and the JIT, .NET Core users don’t need to take any specific action to realize the benefits of this work: all managed applications benefit, because PGO is applied to the runtime and JIT, the components that drive managed execution. The benefit you see will vary between .NET applications, depending on the size and shape of your application.

PGO in .NET Core 2.0

PGO has been used on .NET Framework on Windows for several years.

To find out which components to focus on, the team measured which native DLLs applications were spending the most time in during start-up.

(Chart: percentage of start-up time spent in native DLLs.)

In previous releases there were two different JITs: JIT32 for x86 and RyuJIT for x64. JIT32 is the historic JIT used in the .NET Framework, and it has seen years of optimization. It is quite fast at producing code at startup, but the resulting code is not as good or as fast as RyuJIT’s. With 2.0, the runtime has standardized on RyuJIT for all architectures and platforms. RyuJIT has a slower start-up in some situations, but PGO helps to diminish that performance cost and bring it close to the performance of JIT32.

On Linux, the goal is performance parity, but fragmentation makes PGO a much harder task than on Windows: the compiler tool chains differ from one distro to another, and even different versions of a tool such as LLVM can degrade the ability to apply PGO.

Along with PGO, link-time optimization (LTO) is also being used, via the -flto clang switch. LTO performs optimizations at the level of the entire linked binary instead of module by module.

LTO was already applied on Windows in previous versions, which justifies applying it to more platforms. It was found that on Linux, LTO on its own doesn’t affect the results much, but together with PGO the benefits are almost doubled.

How to profile and optimize your own application?

The tools used to apply PGO to the native parts of the .NET stack are not specific to .NET; they are available for everyone to apply to their own code, or to a customized build of .NET.

Conclusion

.NET Core 2.0 is an important release of this fast-moving platform, and PGO is an integral part of the performance strategy that ultimately benefits all .NET Core 2.0 applications.

With this we conclude. Hope the discussion was helpful for you.

Keep coding!

If you want to enhance your skills in .NET through a Dot NET training program, our institute would be of great help and support.

We offer a well-structured program for the best Dot Net training course. Among the many reputed institutes for dot net training and placement in Pune, CRB Tech has created a niche for itself.

Stay connected to CRB Tech to upgrade your technical skills and to stay updated with all the happenings in the world of Dot Net.


An Introduction To BenchmarkDotNet

In this blog we introduce BenchmarkDotNet. It is a powerful cross-platform library that helps you measure the behaviour of your code with a high level of precision, even when you work with very fast operations. It is nowadays used by a number of big .NET projects.


BenchmarkDotNet

Benchmarking, especially micro-benchmarking, is quite difficult: it is easy to make an error while measuring performance. BenchmarkDotNet protects you from the common pitfalls because it does all the dirty work for you: it generates an isolated project for each benchmark method, performs several launches of that project, runs multiple iterations of the method, and much more. Often you don’t even need to care about the number of iterations, because BenchmarkDotNet chooses it automatically to achieve the requested level of precision.

Designing an experiment with BenchmarkDotNet is not difficult; it is actually quite easy. You mark your method with the [Benchmark] attribute and the benchmark is ready.
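As a minimal, illustrative sketch (the measured methods are placeholders; only the [Benchmark] attribute and the runner call matter):

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Security.Cryptography;

public class HashingBenchmarks
{
    private readonly byte[] data = new byte[10000];

    [Benchmark]
    public byte[] Md5() => MD5.Create().ComputeHash(data);

    [Benchmark]
    public byte[] Sha256() => SHA256.Create().ComputeHash(data);
}

public class Program
{
    public static void Main(string[] args)
    {
        // Runs every [Benchmark] method in the class and prints a summary table.
        BenchmarkRunner.Run<HashingBenchmarks>();
    }
}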

Do you wish to run your code on CoreCLR, Mono, and the full .NET Framework? A few more attributes and the corresponding projects are generated, with the results presented in the same summary table. In fact, you can compare any environments you want: you can check the performance difference between processor architectures, JIT versions, different sets of GC flags, and much more. You can also introduce one or several parameters and check performance against several inputs at once.

BenchmarkDotNet helps you not only run benchmarks but also analyze the results: it produces reports in different formats and renders nice plots. It calculates many statistics, lets you run statistical tests, and compares the results of different benchmark methods. It doesn’t burden you with data: by convention, BenchmarkDotNet prints only the really important statistical values for your results, which keeps the summary small and simple for basic cases while still notifying you about additional important points in complex cases (you can request any numbers manually via additional attributes).

BenchmarkDotNet doesn’t just blindly run your code; it tries to help you carry out a qualitative investigation of its behaviour.

It is already a full-featured benchmark library for many kinds of performance research, and many developers use it. But it is still being actively developed, and a lot of nice features are on the way.

If you have any idea to add or discuss, you could write us as a comment below.

With this we conclude. Hope the discussion was helpful for you.

Keep coding!

If you want to enhance your skills in .NET through a Dot NET training program, our institute would be of great help and support. We offer a well-structured program for the best Dot Net course. Among the many reputed institutes for dot net training and placement in Pune, CRB Tech has created a niche for itself.

Stay connected to CRB Tech to upgrade your technical skills and to stay updated with all the happenings in the world of Dot Net.


How To Deal Camel Casing In ASP.NET Core Web API

If you have worked with Web API in ASP.NET Core, you must have noticed that while serializing data for the client, ASP.NET Core Web API uses camel casing. In other words, suppose your server-side C# class looks like this:


[Table("Employees")]
public class Employee
{
    [DatabaseGenerated(DatabaseGeneratedOption.Identity)]
    [Required]
    public int EmployeeID { get; set; }

    [Required]
    public string FirstName { get; set; }

    [Required]
    public string LastName { get; set; }

    [Required]
    public string City { get; set; }
}

Then, after JSON serialization, a sample Employee object will look like this:

{
    "employeeID": 1,
    "firstName": "Nancy",
    "lastName": "Davolio",
    "city": "Seattle"
}

You will see that all the property names get converted to their camel case equivalents (EmployeeID to employeeID, FirstName to firstName, and so on).

This default behaviour doesn’t pose many problems if the client is a C# application (HttpClient based), but if the client is a JavaScript client you might need to change it. Even with JavaScript clients, camel casing might not pose a problem in many cases, because camel casing is used a lot in the JavaScript world; if you are using a JS framework, chances are you will be using camel casing for data binding and similar things.

At times, though, you might want to preserve the casing of the original C# property names. Suppose you are porting or reusing a piece of JavaScript code that uses the same casing as the C# class, and you want to prevent that code from breaking. That requires the JSON serialization in ASP.NET Core to preserve the casing of the underlying C# class.

Although the default behaviour is to use camel casing, you can change it to preserve the original casing. Here is how.

Open the Startup class of the Web API application and look for the ConfigureServices() method. Currently it contains the following call:

services.AddMvc();

Change that line to this:

services.AddMvc()
        .AddJsonOptions(options =>
            options.SerializerSettings.ContractResolver = new DefaultContractResolver());

The above code uses the AddJsonOptions() extension method and sets the ContractResolver property to an instance of the DefaultContractResolver class.

Note that for the above code to compile correctly you must do the following:

Bring in Newtonsoft.Json.Serialization namespace

Add the NuGet package Microsoft.AspNetCore.Mvc.Formatters.Json

If you run the application now, you will see that the JSON output no longer uses camel casing. Remember that DefaultContractResolver preserves whatever casing the C# class uses; it doesn’t automatically change names to Pascal case.

Observe the casing of the properties: whatever casing the underlying C# class uses is kept as-is during JSON serialization.

What if you want to explicitly specify that you want camel casing? That’s quite simple: just use the CamelCasePropertyNamesContractResolver class. The following code shows how:

services.AddMvc()
        .AddJsonOptions(options =>
            options.SerializerSettings.ContractResolver = new CamelCasePropertyNamesContractResolver());

This time you set the ContractResolver to a new instance of the CamelCasePropertyNamesContractResolver class, which makes the serializer use camel casing during JSON serialization.

With this we conclude. Hope the discussion was helpful for you. Keep coding!

If you want to enhance your skills in .NET through a Dot NET training program, our institute would be of great help and support. We offer a well-structured program for the best .NET course. Among the many reputed institutes for dot net training and placement in Pune, CRB Tech has created a niche for itself.

Stay connected to CRB Tech to upgrade your technical skills and to stay updated with all the happenings in the world of Dot Net.


Registering Custom Directories For Views In ASP.NET MVC

In ASP.NET MVC, when an application is created, the views for the controller actions live in the Views directory. For instance, by default the template creates a Home controller with an Index action, and if you look in Solution Explorer you will see a Views directory, a Home folder inside it, and then Index.cshtml for that action, as shown below:


public class HomeController : Controller
{
    public ActionResult Index()
    {
        return View();
    }
}


Obviously, the framework will first look for the Index.cshtml file in the Views/Home folder, and if it cannot find it there, it will look in the Views/Shared folder. If it is not found there either, an exception is thrown saying that the view file was not found. Here is the exception text:

The view ‘Index’ or its master was not found or no view engine supports the searched locations. The following locations were searched:

~/Views/Home/Index.aspx

~/Views/Home/Index.ascx

~/Views/Shared/Index.aspx

~/Views/Shared/Index.ascx

~/Views/Home/Index.cshtml

~/Views/Home/Index.vbhtml

~/Views/Shared/Index.cshtml

~/Views/Shared/Index.vbhtml

The same is the case for a partial view: when you call return PartialView(), it first looks in the respective controller’s folder (Views/Home in the case of HomeController) and, on failure, looks in the Views/Shared folder.

Now, what if you create a separate directory for partial views in the Views folder and the Shared folder, such as Views/Home/Partials and Views/Shared/Partials? Then you have to tell the view engine to look in those directories as well, by writing the code shown later in the Global.asax file, in the Application_Start event.

Suppose you return _LoginPartial.cshtml from the Index action of the HomeController. What happens is that the engine looks in the Views/Home directory first and, on failure, looks in Views/Shared. But this time the partial views live in a separate directory named Partials for every controller and for Shared as well: the HomeController partial views reside in Views/Home/Partials and the shared ones in Views/Shared/Partials:

public class HomeController : Controller
{
    public ActionResult Index()
    {
        return View();
    }
}

In this case as well, you will get the same exception, because the engine will not be able to find the view file _LoginPartial.cshtml.

The beauty of the ASP.NET MVC framework is its extensibility, which you can use according to your personal and business needs. One example: if you want your own directory structure for organizing your views, you can register those directories with the Razor view engine. That makes life a little easier, because you will not have to specify the fully qualified path of the view; Razor knows that it needs to look in the registered directories as well.

What you need to do is register this directory pattern in the application, so that every time you call any view, the engine also looks in the directories in which you placed the view files. Here is the code for that:

public class MvcApplication : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        AreaRegistration.RegisterAllAreas();
        WebApiConfig.Register(GlobalConfiguration.Configuration);
        FilterConfig.RegisterGlobalFilters(GlobalFilters.Filters);
        RouteConfig.RegisterRoutes(RouteTable.Routes);
        BundleConfig.RegisterBundles(BundleTable.Bundles);
        AuthConfig.RegisterAuth();

        RazorViewEngine razorEngine = ViewEngines.Engines.OfType<RazorViewEngine>().FirstOrDefault();
        if (razorEngine != null)
        {
            var newPartialViewFormats = new[]
            {
                "~/Views/{1}/Partials/{0}.cshtml",
                "~/Views/Shared/Partials/{0}.cshtml"
            };
            razorEngine.PartialViewLocationFormats =
                razorEngine.PartialViewLocationFormats.Union(newPartialViewFormats).ToArray();
        }
    }
}

So when you call return PartialView(), the engine will also look in the controller’s Views subdirectory named Partials, and if it doesn’t find the view there it will look in both Views/Shared and Views/Shared/Partials.

In a similar way you can register other directories or your own custom directory structure if you wish. Doing it this way, you won’t need to specify the complete path, like return View("~/Views/Shared/Partials/Index.cshtml"); instead you can simply write return View() if you want to load the Index view from an action named Index, or return View("Index") if you want to return the Index view from some other action.

We conclude now. Keep coding!

If you want to enhance your skills in .NET through a Dot NET training program, our institute, CRB Tech Solutions, would be of great help and support. We offer a well-structured program for the best .NET course. Among the many reputed institutes for dot net training and placement in Pune, CRB Tech has created a niche for itself.

Stay connected to CRB Tech to upgrade your technical skills and to stay updated with all the happenings in the world of Dot Net.


The Need Of Correlation IDs In ASP.Net

Correlation IDs are becoming a very common requirement now that “microservices” communicate with each other over HTTP.


Why is there a need of correlation ID?

A problem that develops when multiple services are separated into units is how to track the flow of a single user request through each of the individual services involved in generating the response.

This is essential for logging and diagnosing faults in the system. Once the request leaves the first API service and control passes to the backend API service, it becomes hard to follow the logs.

A failure in the backend will impact the front-end API service, but the two could appear to be unrelated errors. We need a way to see the entire flow, from the moment a request hits the front-end API, through the backend API and back again.

Solution

This is where correlation IDs come into play. A correlation ID is a unique identifier that is passed through the entire request flow and passed on between the services.

When each service needs to log something, it can include this correlation ID, ensuring that we can track a full user request from start to finish.

Correlation ID Options

The middleware is quite simple and straightforward and has only two options.

public class CorrelationIdOptions
{
    private const string DefaultHeader = "X-Correlation-ID";

    /// <summary>
    /// The header field name where the correlation ID will be stored
    /// </summary>
    public string Header { get; set; } = DefaultHeader;

    /// <summary>
    /// Controls whether the correlation ID is returned in the response headers
    /// </summary>
    public bool IncludeInResponse { get; set; } = true;
}

Both the sending and receiving APIs need to use the same header name so that they can locate and pass on the correlation ID. This defaults to “X-Correlation-ID”, which is a common name for this type of header.

The second option decides whether the correlation ID is included in the HttpResponse.

Middleware

The main part of the solution is a piece of middleware, plus an extension method that gives an easy way to register the middleware. The code structure follows the middleware patterns defined by ASP.NET Core. The middleware class looks like this:

public class CorrelationIdMiddleware
{
    private readonly RequestDelegate _next;
    private readonly CorrelationIdOptions _options;

    public CorrelationIdMiddleware(RequestDelegate next, IOptions<CorrelationIdOptions> options)
    {
        if (options == null)
        {
            throw new ArgumentNullException(nameof(options));
        }

        _next = next ?? throw new ArgumentNullException(nameof(next));
        _options = options.Value;
    }

    public Task Invoke(HttpContext context)
    {
        if (context.Request.Headers.TryGetValue(_options.Header, out StringValues correlationId))
        {
            context.TraceIdentifier = correlationId;
        }

        if (_options.IncludeInResponse)
        {
            // apply the correlation ID to the response header for client side tracking
            context.Response.OnStarting(() =>
            {
                context.Response.Headers.Add(_options.Header, new[] { context.TraceIdentifier });
                return Task.CompletedTask;
            });
        }

        return _next(context);
    }
}

The constructor accepts the configuration through the IOptions&lt;T&gt; pattern.

The main Invoke method, which is called by the framework, is where the real work happens.

First, we check for an existing correlation ID coming in through a request header. We take advantage of one of the C# 7 improvements, which lets you declare the out variable inline rather than pre-declaring it before the TryGetValue line.

The next block is optional and is controlled by the IncludeInResponse configuration property. If it is true, we use the OnStarting callback to safely include the correlation ID header in the response.

Extension Method

The final piece of code needed is a set of extension methods that make adding the middleware to the pipeline easier for consumers. The code looks like this:

public static class CorrelationIdExtensions
{
    public static IApplicationBuilder UseCorrelationId(this IApplicationBuilder app)
    {
        if (app == null)
        {
            throw new ArgumentNullException(nameof(app));
        }

        return app.UseMiddleware<CorrelationIdMiddleware>();
    }

    public static IApplicationBuilder UseCorrelationId(this IApplicationBuilder app, string header)
    {
        if (app == null)
        {
            throw new ArgumentNullException(nameof(app));
        }

        return app.UseCorrelationId(new CorrelationIdOptions
        {
            Header = header
        });
    }

    public static IApplicationBuilder UseCorrelationId(this IApplicationBuilder app, CorrelationIdOptions options)
    {
        if (app == null)
        {
            throw new ArgumentNullException(nameof(app));
        }

        if (options == null)
        {
            throw new ArgumentNullException(nameof(options));
        }

        return app.UseMiddleware<CorrelationIdMiddleware>(Options.Create(options));
    }
}

This provides a UseCorrelationId() method as an extension on IApplicationBuilder. The overloads register the CorrelationIdMiddleware and can accept either a custom name for the header or a complete CorrelationIdOptions object, ensuring that the middleware behaves as expected.

Middleware is an easy way to build the logic needed for correlation IDs into an ASP.NET Core application.
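As a sketch of how the extension method above could be wired into an application, inside Startup.Configure (register it early so every later component, and its logging, sees the same identifier):

public void Configure(IApplicationBuilder app)
{
    app.UseCorrelationId();                          // uses the default "X-Correlation-ID" header
    // or: app.UseCorrelationId("X-My-Correlation-ID");

    app.UseMvc();
}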

We conclude now. Keep coding!

If you want to enhance your skills in .NET through a Dot NET training program, our institute, CRB Tech Solutions, would be of great help and support. We offer a well-structured program for the best Dot Net course. Among the many reputed institutes for dot net training and placement in Pune, CRB Tech has created a niche for itself.

Stay connected to CRB Tech to upgrade your technical skills and to stay updated with all the happenings in the world of Dot Net.
