Multiple File Upload w/ Compression

This is an extension of some great CSS that had no actual functionality beyond a progress bar.

So I tapped into some previous uploaders I had created and came up with this.

Compression was on the table so I threw that in too.

AssetUploader

     

The C#

private bool keepOriginal = true;

private bool PerformUploadComplete()
{
    UploadDetail Upload = new UploadDetail { IsReady = false };

    //if compression is requested >>> compress and return the result (optionally removing the original)
    if (Request["remove"] != null && System.Convert.ToBoolean(Request["remove"]))
    {
        keepOriginal = false;
    }

    //Let the web service know that we are not yet ready
    // Upload.IsReady = false;
    if (Request.Files[0] != null && Request.Files[0].ContentLength > 0)
    {
        string _creator = String.Empty;

        //build the local path where all the files will be uploaded
        string path = Server.MapPath("~/PDF");
        string fileName = Path.GetFileName(Request.Files[0].FileName);

        //Build the structure and stuff it into the DTO
        Upload.Creator = _creator;
        Upload.ContentLength = Request.Files[0].ContentLength;
        Upload.FileName = fileName;
        Upload.UploadedLength = 0;
        //Let the polling process know that we are done initializing ...
        Upload.IsReady = true;

        //Adjust the buffer size to taste: the smaller the buffer,
        //the longer the upload will take, but the more precise your progress bar will be.
        int bufferSize = 1;
        byte[] buffer = new byte[bufferSize];
        try
        {
            //Write the bytes to disk
            using (FileStream fs = new FileStream(Path.Combine(path, fileName), FileMode.Create))
            {
                //As long as we haven't written everything ...
                while (Upload.UploadedLength < Upload.ContentLength)
                {
                    //Fill the buffer from the input stream
                    int bytes = Request.Files[0].InputStream.Read(buffer, 0, bufferSize);
                    //Write the bytes to the file stream
                    fs.Write(buffer, 0, bytes);
                    //Update the number the web service is polling on
                    Upload.UploadedLength += bytes;
                }
            }

            if (IsCompressed(fileName, path))
            {
                this.litScriptUpdate.Text = "<script type=\"text/javascript\">notify('" + fileName + " compressed successfully');</script>";
            }
        }
        catch (Exception)
        {
            return false;
        }

        if (!keepOriginal) { DeleteFile(fileName, path); }
    }

    return Upload.IsReady;
}
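The UploadDetail DTO and the IsCompressed / DeleteFile helpers are referenced above but not shown. Here is a minimal sketch of what they could look like, assuming a simple GZip pass that writes a .gz copy next to the original; the compression choice and property types are my assumptions, only the member names used above come from the post.

using System;
using System.IO;
using System.IO.Compression;

//DTO the page fills in and the polling web service reads back
public class UploadDetail
{
    public bool IsReady { get; set; }
    public string Creator { get; set; }
    public string FileName { get; set; }
    public int ContentLength { get; set; }
    public int UploadedLength { get; set; }
}

//Helpers in the page code-behind. GZip is an assumption here; any compressor could be swapped in.
private bool IsCompressed(string fileName, string path)
{
    try
    {
        string source = Path.Combine(path, fileName);
        using (FileStream input = File.OpenRead(source))
        using (FileStream output = File.Create(source + ".gz"))
        using (GZipStream gzip = new GZipStream(output, CompressionMode.Compress))
        {
            //stream the original file through the compressor onto disk
            input.CopyTo(gzip);
        }
        return true;
    }
    catch (Exception)
    {
        return false;
    }
}

private void DeleteFile(string fileName, string path)
{
    string fullPath = Path.Combine(path, fileName);
    if (File.Exists(fullPath)) { File.Delete(fullPath); }
}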

 The Bloody Html

<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
    <title></title>
</head>
<body>
    <form id="form" runat="server" enctype="multipart/form-data">
        <!-- target for the notify() status message -->
        <div id="status2"></div>
    </form>

    <script type="text/javascript">
        function notify(message) {
            // alert(message);
            document.getElementById('status2').innerHTML = message;
            document.bgColor = 'lightgreen';
        }
    </script>

    <asp:Literal ID="litScriptUpdate" runat="server"></asp:Literal>
</body>
</html>
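The comments in the code-behind mention a polling web service that reports the upload progress, which isn't shown in the post. A rough sketch, assuming the page stashes the UploadDetail DTO in Session (e.g. Session["UploadDetail"] = Upload; right after it is initialized) so a page method can report progress back to the client; the class name, session key and return shape are my assumptions:

using System.Web;
using System.Web.Services;

public partial class AssetUploader : System.Web.UI.Page
{
    //Polled from the client (e.g. a jQuery timer) to drive the progress bar
    [WebMethod(EnableSession = true)]
    public static object GetUploadProgress()
    {
        //"UploadDetail" is an assumed session key; it must match whatever the upload code stores
        UploadDetail upload = HttpContext.Current.Session["UploadDetail"] as UploadDetail;
        if (upload == null || !upload.IsReady || upload.ContentLength == 0)
        {
            return new { Ready = false, Percent = 0 };
        }

        int percent = (int)(upload.UploadedLength * 100L / upload.ContentLength);
        return new { Ready = true, FileName = upload.FileName, Percent = percent };
    }
}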

Robocopy …know your switches

I recently had a migration where I ran into an encryption FooBar when running Robocopy.

Here is how I resolved it, along with all of the Robocopy switches.

Happy Migrations to you! Until … we ..code… again !!

Switches and what they are:
Copy options :

/S :: copy Subdirectories, but not empty ones.
/E :: copy subdirectories, including Empty ones.
/LEV:n :: only copy the top n LEVels of the source directory tree.

/Z :: copy files in restartable mode.
/B :: copy files in Backup mode.
/ZB :: use restartable mode; if access denied use Backup mode.
/EFSRAW :: copy all encrypted files in EFS RAW mode.

So my command for the network transfer looks like this:
robocopy \\MACHINEA\c$\stuff \\MACHINEB\c$\clones /R:0 /XD $tf /E /EFSRAW
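If you also want capped retries, a shorter wait and a log kept for the run, the same copy could be extended like this (the log path is just a placeholder):
robocopy \\MACHINEA\c$\stuff \\MACHINEB\c$\clones /XD $tf /E /EFSRAW /ZB /R:2 /W:5 /LOG+:C:\logs\migration.log /TEE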

& the other switches

/COPY:copyflag[s] :: what to COPY for files (default is /COPY:DAT).
(copyflags : D=Data, A=Attributes, T=Timestamps).
(S=Security=NTFS ACLs, O=Owner info, U=aUditing info).

/DCOPY:T :: COPY Directory Timestamps.

/SEC :: copy files with SECurity (equivalent to /COPY:DATS).
/COPYALL :: COPY ALL file info (equivalent to /COPY:DATSOU).
/NOCOPY :: COPY NO file info (useful with /PURGE).

/SECFIX :: FIX file SECurity on all files, even skipped files.
/TIMFIX :: FIX file TIMes on all files, even skipped files.

/PURGE :: delete dest files/dirs that no longer exist in source.
/MIR :: MIRror a directory tree (equivalent to /E plus /PURGE).

/MOV :: MOVe files (delete from source after copying).
/MOVE :: MOVE files AND dirs (delete from source after copying).

/A+:[RASHCNET] :: add the given Attributes to copied files.
/A-:[RASHCNET] :: remove the given Attributes from copied files.

/CREATE :: CREATE directory tree and zero-length files only.
/FAT :: create destination files using 8.3 FAT file names only.
/256 :: turn off very long path (> 256 characters) support.

/MON:n :: MONitor source; run again when more than n changes seen.
/MOT:m :: MOnitor source; run again in m minutes Time, if changed.

/RH:hhmm-hhmm :: Run Hours – times when new copies may be started.
/PF :: check run hours on a Per File (not per pass) basis.

/IPG:n :: Inter-Packet Gap (ms), to free bandwidth on slow lines.

/SL :: copy symbolic links versus the target.

File Selection Options :

/A :: copy only files with the Archive attribute set.
/M :: copy only files with the Archive attribute and reset it.
/IA:[RASHCNETO] :: Include only files with any of the given Attributes set.
/XA:[RASHCNETO] :: eXclude files with any of the given Attributes set.

/XF file [file]… :: eXclude Files matching given names/paths/wildcards.
/XD dirs [dirs]… :: eXclude Directories matching given names/paths.

/XC :: eXclude Changed files.
/XN :: eXclude Newer files.
/XO :: eXclude Older files.
/XX :: eXclude eXtra files and directories.
/XL :: eXclude Lonely files and directories.
/IS :: Include Same files.
/IT :: Include Tweaked files.

/MAX:n :: MAXimum file size – exclude files bigger than n bytes.
/MIN:n :: MINimum file size – exclude files smaller than n bytes.

/MAXAGE:n :: MAXimum file AGE – exclude files older than n days/date.
/MINAGE:n :: MINimum file AGE – exclude files newer than n days/date.
/MAXLAD:n :: MAXimum Last Access Date – exclude files unused since n.
/MINLAD:n :: MINimum Last Access Date – exclude files used since n.
(If n < 1900 then n = n days, else n = YYYYMMDD date).

/XJ :: eXclude Junction points. (normally included by default).

/FFT :: assume FAT File Times (2-second granularity).
/DST :: compensate for one-hour DST time differences.

/XJD :: eXclude Junction points for Directories.
/XJF :: eXclude Junction points for Files.

Retry Options :

/R:n :: number of Retries on failed copies: default 1 million.
/W:n :: Wait time between retries: default is 30 seconds.

/REG :: Save /R:n and /W:n in the Registry as default settings.

/TBD :: wait for sharenames To Be Defined (retry error 67).

Logging Options :

/L :: List only – don’t copy, timestamp or delete any files.
/X :: report all eXtra files, not just those selected.
/V :: produce Verbose output, showing skipped files.
/TS :: include source file Time Stamps in the output.
/FP :: include Full Pathname of files in the output.
/BYTES :: Print sizes as bytes.

/NS :: No Size – don’t log file sizes.
/NC :: No Class – don’t log file classes.
/NFL :: No File List – don’t log file names.
/NDL :: No Directory List – don’t log directory names.

/NP :: No Progress – don’t display % copied.
/ETA :: show Estimated Time of Arrival of copied files.

/LOG:file :: output status to LOG file (overwrite existing log).
/LOG+:file :: output status to LOG file (append to existing log).

/UNILOG:file :: output status to LOG file as UNICODE (overwrite existing log).
/UNILOG+:file :: output status to LOG file as UNICODE (append to existing log).

/TEE :: output to console window, as well as the log file.

/NJH :: No Job Header.
/NJS :: No Job Summary.

/UNICODE :: output status as UNICODE.

Job Options :

/JOB:jobname :: take parameters from the named JOB file.
/SAVE:jobname :: SAVE parameters to the named job file
/QUIT :: QUIT after processing command line (to view parameters).
/NOSD :: NO Source Directory is specified.
/NODD :: NO Destination Directory is specified.
/IF :: Include the following Files.

The specified file could not be encrypted. {SOLVED}

The specified file could not be encrypted.

The solution turned out to be pretty simple:

1. Go to each file (or folder) giving you issues

2. Right click and select Properties

3. Hit the Advanced button

4. Uncheck “Encrypt contents to secure data”

5. Click OK, then OK on the other dialog (or APPLY)

6. Open (or preview) any file in the folder giving you issues

So why does this happen?

If Windows is storing your files in encrypted form, when the publish process attempts to copy the file, it tries to encrypt the file again and you get this error.

After clearing the encryption from the files, I was still experiencing the issue. It then occurred to me that the files might not get “decrypted” until I open them again, which seems to be the case.
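If a lot of files are affected, the same decryption can be scripted with the built-in cipher tool instead of clicking through Properties; roughly (the path is a placeholder):

cipher /d /a /s:C:\path\to\problem\folder

Here /d decrypts, /a applies the operation to files as well as directories, and /s: recurses into subdirectories.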

Eternal Load of Toolbox: Fix for Visual Studio 2013

 

  • Close Visual Studio;
  • Open the “C:\Users\<user name>\AppData\Local\Microsoft\VisualStudio\10.0” folder (Windows 7) and remove all the .TBD files (10.0 is VS 2010, 12.0 is VS 2013, etc.);
  • Run the “regedit” tool. For this, click the “Run” item in the Start menu and type “regedit” without quotation marks;
  • Find the “HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\10.0\ToolboxControlsInstaller_AssemblyFoldersExCache” and “HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\10.0\ToolboxControlsInstallerCache” keys;
  • Remove everything from these keys, leaving them empty (delete all subkeys under ToolboxControlsInstaller_AssemblyFoldersExCache and ToolboxControlsInstallerCache);
  • Run Visual Studio again.
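A rough command-prompt equivalent of the file and registry steps above, assuming VS 2013 (12.0) and that you are OK with the two keys being deleted and recreated empty:

del "%LOCALAPPDATA%\Microsoft\VisualStudio\12.0\*.tbd"
reg delete "HKCU\Software\Microsoft\VisualStudio\12.0\ToolboxControlsInstaller_AssemblyFoldersExCache" /f
reg add "HKCU\Software\Microsoft\VisualStudio\12.0\ToolboxControlsInstaller_AssemblyFoldersExCache"
reg delete "HKCU\Software\Microsoft\VisualStudio\12.0\ToolboxControlsInstallerCache" /f
reg add "HKCU\Software\Microsoft\VisualStudio\12.0\ToolboxControlsInstallerCache"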

 

To Cache or not to Cache

..that is the question—
Whether ’tis Nobler in the mind to suffer
The Slings and Arrows of outrageous Fortune, Or to take Arms against a Sea of troubles,
Selecting caching & NoSQL solutions is no joke, & I may be basing my pick on this graphic from Perfect Market and the comments of Jun Xu. (I am comparing Redis vs MongoDB.)

Redis is an excellent caching solution and we almost adopted it in our system. Redis stores the whole hash table in memory and has a background thread that saves a snapshot of the hash table onto the disk based on a preset time interval. If the system is rebooted, it can load the snapshot from disk into memory and have the cache warmed at startup. It takes a couple of minutes to restore 20GB of data, depending on your disk speed. This is a great idea and Redis is a decent implementation.
But for our use cases it did not fit well. The background saving process still bothered me, especially as the hash table got bigger, and I feared it might negatively impact read speed. Using logging-style persistence instead of saving the whole snapshot could mitigate the impact of these big dumps, but the data size will bloat if it logs frequently, which eventually may hurt restore time. The single-threaded model does not sound that scalable either, although, in my testing, it scaled pretty well horizontally with a few hundred concurrent reads.
Another thing that bothered me with Redis was that the whole data set must fit into physical memory. It would not be easy to manage this in our diversified environment across different phases of the product lifecycle. Redis’ recent virtual memory (VM) release might mitigate this problem, though.

MongoDB is by far the solution I love the most, among all the solutions I have evaluated, and was the winner out of the evaluation process and is currently used in our platform.
MongoDB provides distinctly superior insertion speed, probably due to deferred writes and fast file extension with its multiple-files-per-collection structure. As long as you give enough memory to your box, hundreds of millions of rows can be inserted in hours, not days. I would post exact numbers here, but they would be too specific to be useful. But trust me — MongoDB offers very fast bulk inserts.
MongoDB uses memory-mapped files, and it usually takes only nanoseconds to resolve the minor page faults that get file-system-cached pages mapped into MongoDB’s memory space. Compared to other solutions, MongoDB does not compete with the page cache, since they share the same memory for read-only blocks. With other solutions, if you allocate too much memory to the tool itself, the box may fall short on page cache, and usually it’s not easy, or there may not be an efficient way, to have the tool’s cache fully pre-warmed (you definitely don’t want to read every row beforehand!).
For MongoDB, it’s very easy to do some simple tricks (copy, cat or whatever) to have all data loaded in page cache. Once in that state, MongoDB is just like Redis, which performs super well on random reads.
In one of the tests I did, MongoDB showed overall 400,000 QPS with 200 concurrent clients doing constant random reads on a large data set (hundreds of millions of rows). In the test, data was pre-warmed in the page cache. In later tests, MongoDB also showed great random read speed under moderate write load. For a relatively big payload, we compress it and then save it in MongoDB to further reduce data size, so more stuff can fit into memory.
MongoDB provides a handy client (similar to MySQL’s) which is very easy to use. It also provides advanced query features, and features for handling big documents, but we don’t use any of them. MongoDB is very stable and almost zero maintenance, except you may need to monitor memory usage when data grows. MongoDB has rich client support in different languages, which makes it very easy to use. I will not go through the laundry list here but I think you get the point.

A better DI with Autofac Repositories

I have most recently been using StructureMap for DI, but I would have to say Autofac would be my DI choice for licensing, ease of setup & performance.

Both keep data access out of the controller, but the difference is in the instantiation and in the ability to unit test & mock.

Autofac is free and doesn’t have those shortcomings, so it’s an easy choice for MVC.

Setting up is easy:

___________________________________________
For Interfaces: (From Interfaces Project or Folder)

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using ClientRepository.Data;

namespace ClientRepository.Interfaces
{
    public interface ICustomerRepository
    {
        IEnumerable<Customer> SelectAll();
        Customer SelectByID(string id);
        void Insert(Customer obj);
        void Delete(string id);
        void Save();
    }
}

___________________________________________

For Data Access: (From Data Project)
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Data.Entity;
using ClientRepository.Interfaces;

namespace ClientRepository.Data
{
    public class CustomerRepository : ICustomerRepository
    {
        IClientDB_DBEntities ClientDBContext;

        public CustomerRepository(IClientDB_DBEntities db)
        {
            ClientDBContext = db;
        }

        public IEnumerable<Customer> SelectAll()
        {
            return ClientDBContext.Customers.ToList();
        }

        public Customer SelectByID(string id)
        {
            return ClientDBContext.Customers.Find(id);
        }

        public void Insert(Customer obj)
        {
            ClientDBContext.Customers.Add(obj);
        }

        public void Delete(string id)
        {
            var value = ClientDBContext.Customers.Where(i => i.CustomerID == id).FirstOrDefault();
            ClientDBContext.Customers.Remove(value);
        }

        public void Save()
        {
            ClientDBContext.SaveChanges();
        }
    }
}

___________________________________________
For Data Access: (Register Repositories)

using System.Web.Mvc;
using Autofac;
using Autofac.Integration.Mvc;
using ClientRepository.Interfaces;
using RepositoryPattern.Controllers;

namespace ClientRepository.Data
{
    public static class AutofacConfig
    {
        public static void RegisterComponents()
        {
            var builder = new ContainerBuilder();
            builder.RegisterType<CustomerRepository>().As<ICustomerRepository>();
            builder.RegisterType<CustomerController>();
            builder.RegisterType<ClientDB_DBEntities>().As<IClientDB_DBEntities>();
            var container = builder.Build();
            DependencyResolver.SetResolver(new AutofacDependencyResolver(container));
        }
    }
}
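RegisterComponents needs to be called once at startup. A typical place, assuming a standard MVC Global.asax (RouteConfig here is the stock MVC template’s route registration), would be Application_Start:

using System.Web.Mvc;
using System.Web.Routing;
using ClientRepository.Data;

namespace RepositoryPattern
{
    public class MvcApplication : System.Web.HttpApplication
    {
        protected void Application_Start()
        {
            //wire up Autofac before MVC starts resolving controllers
            AutofacConfig.RegisterComponents();

            AreaRegistration.RegisterAllAreas();
            RouteConfig.RegisterRoutes(RouteTable.Routes);
        }
    }
}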

___________________________________________

For the Web Project: a clean implementation in the controller

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Mvc;
using ClientRepository.Interfaces;
using ClientRepository.Data;

namespace RepositoryPattern.Controllers
{
    public class CustomerController : Controller
    {
        ICustomerRepository _CustomerRepo;

        public CustomerController(ICustomerRepository customerRepo)
        {
            _CustomerRepo = customerRepo;
        }

        //
        // GET: /Customer/
        public ActionResult Index()
        {
            List<Customer> model = (List<Customer>)_CustomerRepo.SelectAll();
            return View(model);
        }

        public ActionResult Insert(Customer obj)
        {
            _CustomerRepo.Insert(obj);
            _CustomerRepo.Save();
            return View();
        }

        public ActionResult Edit(string id)
        {
            Customer existing = _CustomerRepo.SelectByID(id);
            return View(existing);
        }

        public ActionResult ConfirmDelete(string id)
        {
            Customer existing = _CustomerRepo.SelectByID(id);
            return View(existing);
        }

        public ActionResult Delete(string id)
        {
            _CustomerRepo.Delete(id);
            _CustomerRepo.Save();
            return View();
        }
    }
}

And finally, your models are decluttered, since the view binds directly to the entity from the Data Access Layer:

@model List<ClientRepository.Data.Customer>
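And since the controller only depends on ICustomerRepository, it is straightforward to unit test with a mocked repository. A quick sketch using Moq and NUnit (swap in your own test framework; the test name and sample data are mine):

using System.Collections.Generic;
using System.Web.Mvc;
using ClientRepository.Data;
using ClientRepository.Interfaces;
using Moq;
using NUnit.Framework;
using RepositoryPattern.Controllers;

[TestFixture]
public class CustomerControllerTests
{
    [Test]
    public void Index_Returns_View_With_All_Customers()
    {
        //Arrange: fake repository instead of hitting the database
        var customers = new List<Customer> { new Customer { CustomerID = "ALFKI" } };
        var repo = new Mock<ICustomerRepository>();
        repo.Setup(r => r.SelectAll()).Returns(customers);

        var controller = new CustomerController(repo.Object);

        //Act
        var result = controller.Index() as ViewResult;

        //Assert
        Assert.IsNotNull(result);
        Assert.AreEqual(1, ((List<Customer>)result.ViewData.Model).Count);
    }
}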