To Cache or not to Cache

…that is the question—
Whether ’tis Nobler in the mind to suffer
The Slings and Arrows of outrageous Fortune,
Or to take Arms against a Sea of troubles,
Selecting caching & NoSQL solutions is no joke, and I may be basing my pick on this graphic from Perfect Market and the comments of Jun Xu. (I am comparing Redis vs MongoDB.)

Redis is an excellent caching solution, and we almost adopted it in our system. Redis stores the whole hash table in memory and has a background save process that writes a snapshot of the hash table to disk at a preset interval. If the system is rebooted, it can load the snapshot from disk into memory and have the cache warmed at startup. It takes a couple of minutes to restore 20GB of data, depending on your disk speed. This is a great idea, and Redis was a decent implementation.
But for our use cases it did not fit well. The background saving process still bothered me, especially as the hash table got bigger, and I feared it might negatively impact read speed. Using append-only, logging-style persistence instead of saving whole snapshots could mitigate the impact of these big dumps, but the data file bloats if it is written frequently, which eventually hurts restore time. The single-threaded model does not sound that scalable either, although, in my testing, it held up pretty well with a few hundred concurrent reads.
Another thing that bothered me about Redis was that the whole data set must fit into physical memory. That would not be easy to manage in our diversified environment across different phases of the product lifecycle. Redis’ recent virtual memory (VM) feature might mitigate this problem, though.

MongoDB is by far the solution I love the most among all the solutions I have evaluated. It won the evaluation and is currently used in our platform.
MongoDB provides distinctly superior insertion speed, probably due to deferred writes and fast file extension with its multiple-files-per-collection structure. As long as you give the box enough memory, hundreds of millions of rows can be inserted in hours, not days. I would post exact numbers here, but they would be too specific to be useful. But trust me: MongoDB offers very fast bulk inserts.
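
As a rough illustration only, here is a minimal bulk-insert sketch using the official MongoDB .NET driver (2.x-style API, which differs from the older drivers of this era); the connection string, database, collection, and field names are all placeholders:

using System.Collections.Generic;
using MongoDB.Bson;
using MongoDB.Driver;

class BulkInsertSketch
{
    static void Main()
    {
        // Placeholder connection string and names -- adjust for your environment.
        var client = new MongoClient("mongodb://localhost:27017");
        var collection = client.GetDatabase("cacheDemo")
                               .GetCollection<BsonDocument>("rows");

        // Build one batch of documents and hand the whole batch to the driver,
        // which is much faster than inserting rows one at a time.
        var batch = new List<BsonDocument>();
        for (int i = 0; i < 10000; i++)
        {
            batch.Add(new BsonDocument
            {
                { "_id", i },
                { "value", "row-" + i }
            });
        }
        collection.InsertMany(batch);
    }
}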
MongoDB uses memory-mapped files, and it usually takes only nanoseconds to resolve the minor page faults that get file-system-cached pages mapped into MongoDB’s memory space. Unlike other solutions, MongoDB does not compete with the page cache, since for read-only blocks they are the same memory. With other solutions, if you allocate too much memory to the tool itself, the box may fall short on page cache, and there is usually no easy or efficient way to have the tool’s cache fully pre-warmed (you definitely don’t want to read every row beforehand!).
With MongoDB, it’s very easy to use simple tricks (copy, cat, or whatever) to get all of the data loaded into the page cache. Once in that state, MongoDB is just like Redis: it performs extremely well on random reads.
In one of my tests, MongoDB delivered an overall 400,000 QPS with 200 concurrent clients doing constant random reads on a large data set (hundreds of millions of rows), with the data pre-warmed in the page cache. In later tests, MongoDB also showed great random read speed under moderate write load. For relatively big payloads, we compress them before saving them in MongoDB, to further reduce data size so more of it fits in memory.
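
To illustrate that last point, here is a minimal sketch of GZip-compressing a payload before storing it as a binary field with the .NET driver; the collection, key, and field names are placeholders and not anything from our actual schema:

using System.IO;
using System.IO.Compression;
using System.Text;
using MongoDB.Bson;
using MongoDB.Driver;

static class CompressedPayloadSketch
{
    // GZip-compress a string payload into a byte array.
    static byte[] Compress(string payload)
    {
        using (var output = new MemoryStream())
        {
            using (var gzip = new GZipStream(output, CompressionMode.Compress))
            {
                byte[] raw = Encoding.UTF8.GetBytes(payload);
                gzip.Write(raw, 0, raw.Length);
            }
            return output.ToArray();
        }
    }

    static void Save(IMongoCollection<BsonDocument> collection, string key, string payload)
    {
        // Store the compressed bytes as a binary field so more rows fit in memory.
        var doc = new BsonDocument
        {
            { "_id", key },
            { "payload", new BsonBinaryData(Compress(payload)) }
        };
        collection.InsertOne(doc);
    }
}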
MongoDB provides a handy command-line client (similar to MySQL’s) which is very easy to use. It also provides advanced query features and features for handling big documents, but we don’t use any of them. MongoDB is very stable and needs almost zero maintenance, except that you may need to monitor memory usage as the data grows. It has rich driver support in different languages, which makes it very easy to use. I won’t go through the whole laundry list here, but I think you get the point.

A better DI with Autofac Repositories

I have most recently been using StructureMap for DI, but I would have to say Autofac is my DI choice for licensing, ease of setup, and performance.

Both keep data access out of the controller, but the difference is in how instances are created and in the ability to unit test and mock.

Autofac is free and doesn’t have those shortcomings, so it’s an easy choice for MVC.

Setting up is easy:

___________________________________________
For Interfaces: (From Interfaces Project or Folder)

using System.Collections.Generic;
using ClientRepository.Data;

namespace ClientRepository.Interfaces
{
    public interface ICustomerRepository
    {
        IEnumerable<Customer> SelectAll();
        Customer SelectByID(string id);
        void Insert(Customer obj);
        void Delete(string id);
        void Save();
    }
}

___________________________________________

For Data Access: (From Data Project)

using System.Collections.Generic;
using System.Linq;
using System.Data.Entity;
using ClientRepository.Interfaces;

namespace ClientRepository.Data
{
    public class CustomerRepository : ICustomerRepository
    {
        private readonly IClientDB_DBEntities ClientDBContext;

        public CustomerRepository(IClientDB_DBEntities db)
        {
            ClientDBContext = db;
        }

        public IEnumerable<Customer> SelectAll()
        {
            return ClientDBContext.Customers.ToList();
        }

        public Customer SelectByID(string id)
        {
            return ClientDBContext.Customers.Find(id);
        }

        public void Insert(Customer obj)
        {
            ClientDBContext.Customers.Add(obj);
        }

        public void Delete(string id)
        {
            var value = ClientDBContext.Customers.Where(i => i.CustomerID == id).FirstOrDefault();
            ClientDBContext.Customers.Remove(value);
        }

        public void Save()
        {
            ClientDBContext.SaveChanges();
        }
    }
}

___________________________________________
For Data Access: (Register Repositories)

using System.Web.Mvc;
using Autofac;
using Autofac.Integration.Mvc;
using ClientRepository.Interfaces;
using RepositoryPattern.Controllers;

namespace ClientRepository.Data
{
    public static class AutofacConfig
    {
        public static void RegisterComponents()
        {
            var builder = new ContainerBuilder();
            builder.RegisterType<CustomerRepository>().As<ICustomerRepository>();
            builder.RegisterType<CustomerController>();
            builder.RegisterType<ClientDB_DBEntities>().As<IClientDB_DBEntities>();
            var container = builder.Build();
            DependencyResolver.SetResolver(new AutofacDependencyResolver(container));
        }
    }
}
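
The one piece the snippet above doesn’t show is where RegisterComponents gets called. In an MVC project it normally goes in Application_Start in Global.asax; this sketch assumes the standard MVC template names (MvcApplication, RouteConfig):

using System.Web.Mvc;
using System.Web.Routing;
using ClientRepository.Data;

namespace RepositoryPattern
{
    public class MvcApplication : System.Web.HttpApplication
    {
        protected void Application_Start()
        {
            // Wire up Autofac before the first controller is resolved.
            AutofacConfig.RegisterComponents();

            AreaRegistration.RegisterAllAreas();
            RouteConfig.RegisterRoutes(RouteTable.Routes);
        }
    }
}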

___________________________________________

For the Web Project: (Clean implementation in the controller)

using System.Collections.Generic;
using System.Linq;
using System.Web.Mvc;
using ClientRepository.Interfaces;
using ClientRepository.Data;

namespace RepositoryPattern.Controllers
{
    public class CustomerController : Controller
    {
        private readonly ICustomerRepository _CustomerRepo;

        public CustomerController(ICustomerRepository customerRepo)
        {
            _CustomerRepo = customerRepo;
        }

        //
        // GET: /Customer/
        public ActionResult Index()
        {
            List<Customer> model = _CustomerRepo.SelectAll().ToList();
            return View(model);
        }

        public ActionResult Insert(Customer obj)
        {
            _CustomerRepo.Insert(obj);
            _CustomerRepo.Save();
            return View();
        }

        public ActionResult Edit(string id)
        {
            Customer existing = _CustomerRepo.SelectByID(id);
            return View(existing);
        }

        public ActionResult ConfirmDelete(string id)
        {
            Customer existing = _CustomerRepo.SelectByID(id);
            return View(existing);
        }

        public ActionResult Delete(string id)
        {
            _CustomerRepo.Delete(id);
            _CustomerRepo.Save();
            return View();
        }
    }
}

And finally, your models stay decluttered, since the view binds directly to the type exposed by the data access layer:

@model List<ClientRepository.Data.Customer>
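
Because the controller depends only on ICustomerRepository, it is easy to unit test without a database. This is just a sketch using a hand-rolled in-memory fake; NUnit, the FakeCustomerRepository name, and the assumption that Customer exposes a settable CustomerID are mine, not from the project above:

using System.Collections.Generic;
using System.Web.Mvc;
using NUnit.Framework;
using ClientRepository.Data;
using ClientRepository.Interfaces;
using RepositoryPattern.Controllers;

// A hand-rolled in-memory fake standing in for the EF-backed repository.
public class FakeCustomerRepository : ICustomerRepository
{
    public List<Customer> Customers = new List<Customer>();

    public IEnumerable<Customer> SelectAll() { return Customers; }
    public Customer SelectByID(string id) { return Customers.Find(c => c.CustomerID == id); }
    public void Insert(Customer obj) { Customers.Add(obj); }
    public void Delete(string id) { Customers.RemoveAll(c => c.CustomerID == id); }
    public void Save() { /* nothing to persist in memory */ }
}

[TestFixture]
public class CustomerControllerTests
{
    [Test]
    public void Index_Returns_All_Customers()
    {
        var repo = new FakeCustomerRepository();
        repo.Customers.Add(new Customer { CustomerID = "ALFKI" });

        var controller = new CustomerController(repo);
        var result = (ViewResult)controller.Index();
        var model = (List<Customer>)result.ViewData.Model;

        Assert.AreEqual(1, model.Count);
    }
}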

Quick Exercise in Dependency Injection

I like the reusability you get with DI, and once implemented it really makes for cleaner code out on the front end.

Here is my SalesPlanning report, which checks whether the sales threshold has been reached.

public interface ISalesReport
{
    bool AchievedSales();
}

public class SalesReport : ISalesReport
{
    public bool AchievedSales()
    {
        // DTX is our data access component; returns whether the sales threshold was reached.
        return DTX.AreSalesAboveQuota();
    }
}

public class SalesPlanning
{
    private ISalesReport _sales;

    public SalesPlanning(ISalesReport sales)
    {
        _sales = sales;
    }

    public void CheckSales()
    {
        bool result = _sales.AchievedSales();
        if (result)
            Console.WriteLine("Ring the Bell");
        else
            Console.WriteLine("Play Cold Winter Breeze MP3");
    }
}

class Program
{
    static void Main(string[] args)
    {
        SalesPlanning _planning = new SalesPlanning(new SalesReport());
        _planning.CheckSales();
    }
}
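
Because SalesPlanning only depends on the ISalesReport interface, you can swap in a stub without touching the planning code. A quick illustration, with a made-up StubSalesReport:

// A stub that skips the real data access so CheckSales can be exercised anywhere.
public class StubSalesReport : ISalesReport
{
    private readonly bool _achieved;
    public StubSalesReport(bool achieved) { _achieved = achieved; }
    public bool AchievedSales() { return _achieved; }
}

// Usage: same SalesPlanning class, different dependency.
// new SalesPlanning(new StubSalesReport(true)).CheckSales();  // prints "Ring the Bell"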

Rad Notifications Toast

And yet another awesome Rad control. Not sure how long this one has been around, but I needed something similar to the toast windows I have been creating in jQuery for literally years now. Well, of course the Telerik team has neatly bundled this up into a control that is accessible from server, client, cloud, and travelling comet. I am talking about the RadNotification.

An event notification window usually requires quite a bit of code, depending on your interdependencies and on what you are trying to do. My version was a hybrid of a jQuery snippet from years ago. To deploy it into a web page I would write a service or handler expecting events, and in the calling pages (client-side mouse events) reference three JavaScript files and three images. On the server side of the page I would build up the launching-window script programmatically. Once implemented, the built-up script allows for reusability throughout the project: things like toast window size, icons, header text, message body, and time to live are all passed in, and it returns your choice of an actual control or a string that is the launching script (sketched below). It is really a nice solution in a generic page scenario. Of course, that is not my scenario today, as I have just way too many things going on in my current application, with a lot of older AJAX panels interfering, etc. So I asked: who would have already built something like this, and who thinks the same way I do about packaging things up neatly? Why, Telerik of course.
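
Purely for illustration, here is roughly what that old server-side helper looked like in spirit. Every name and parameter here is hypothetical, and the emitted script just calls a jQuery-style toast plugin (showToast) that would live in one of those referenced JavaScript files:

using System.Text;

// Hypothetical helper: builds the client-side launching script for a toast window.
public static class ToastScriptBuilder
{
    public static string Build(string headerText, string messageBody,
                               int width, int height, string iconUrl,
                               int timeToLiveMs)
    {
        // Emit a small script block that a page (or AJAX callback) can register.
        // Note: a real version would escape quotes in the text parameters.
        var sb = new StringBuilder();
        sb.Append("<script type=\"text/javascript\">");
        sb.AppendFormat(
            "showToast({{ title: '{0}', body: '{1}', width: {2}, height: {3}, icon: '{4}', ttl: {5} }});",
            headerText, messageBody, width, height, iconUrl, timeToLiveMs);
        sb.Append("</script>");
        return sb.ToString();
    }
}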

So on to the storyboard: I am tracking PDFs loaded in another hybrid PDF viewer below. As the pages are viewed, I check off and mark the pages complete. Once I reach certain thresholds, I update the user profile.
Nice so far, but when I am done here I initially reach for the old-school “return confirm” dialog:
~ Congrats, you’re done, etc.

The problem with this Web 1.0 approach is that any time an event fires, the window can reappear, or even worse refuse to leave, even though you are finished reading the file… well, you get the idea. It’s beyond annoying.

Here is where the toast-style notification is perfect: we notify the user and allow them to continue with business.

Problem: event notifications are annoying and a potential source of errors.

Solution: Toolbar / Sidebar notification

Attempt 1:
(My jQuery growler)
This solution cannot be implemented here, as the client-side scripts need to call the server to be scaffolded, then be written into the parent window, and on and on… So my old 2.0 solution would have been great, but we need to elevate this to where we treat the events as nothing more than a message chain.

Attempt 2:

Telerik RadNotification:
Great: my project team is already using Telerik, and they have the latest ASPX libraries.
I had a working prototype in a few minutes.


function CallClientShow() {

    // Resolve the RadNotification client-side object from its server ClientID.
    var notification = $find("<%=RadNotification1.ClientID %>");

    // notification.show();  // direct call, when the page is not hosted in a frame
    // The viewer is framed here, so show() is called on the notification held by the parent window.
    window.parent.notification.show();
}

Nice! My window fades in to the top right corner, and all of the settings are configurable from within Visual Studio. Less hassle for future developers and myself, and my reusability is maintained!


Apples Oranges Bananas and Watermelon

Listening in on some conversation this morning among my heroes of code, these languages came up:

opa – http://opalang.org

ember – emberjs.com

dart – https://www.dartlang.org

typescript – http://www.typescriptlang.org

TypeScript I know and have opted out on, but Opa looks like a not-so-usual suspect and is promising. Ember has been burning our ears, but I never seem to dive in. I will definitely give it a go this week.

How about you hackers???

Matt

The elusive 500 error ?#@!

A 500 server error means that a script has thrown an error; it is not a broken link.

Set your browser to show HTTP errors in full (in IE, turn off “Show friendly HTTP error messages”) so you can debug the script.

Then turn on your favorite Tracing tool.

But for a real clue in an MVC application, you may need to handle those route requests and log what actually blew up; a sketch follows.
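
As a minimal sketch (assuming a standard Global.asax code-behind and that trace output is enough for a first clue), a global error handler lets you see which URL and exception produced the 500:

using System;
using System.Diagnostics;
using System.Web;

// Global.asax code-behind (the class name is whatever your project template generated).
public class MvcApplication : HttpApplication
{
    protected void Application_Error(object sender, EventArgs e)
    {
        // Grab the exception behind the 500 and the URL that was being routed.
        Exception ex = Server.GetLastError();
        Trace.TraceError("500 on " + Request.RawUrl + ": " + ex);

        // Clear the error here only if you go on to render your own error page or route.
        // Server.ClearError();
    }
}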

Happy Coding


GT 500 instead :-D