The High Governance of No-Framework

With evidence in hand, no-framework development is a compelling alternative to frameworks.

With a completed and working no-framework application, we can now revisit the arguments made against the no-framework development strategy in Part 1 of the series. Through the use of high governance, no-framework development is not only possible but a desirable development strategy. But what is “high governance”? High governance is the measure of your ability to leverage your experience and knowledge; to retain control and make sound decisions. By using a framework, you are choosing to surrender some degree of governance to a third party. This is not a flawed decision, so long as the benefits offset the loss of governance. But all too often, and usually at the most inopportune times, the framework impedes development. This is because all frameworks are based on assumptions, and all assumptions leak. So not only have you surrendered governance, you also have to work around the impediments caused by leaky assumptions.

Background

Part 1 of this three-part series explores the background, motivation, and architectural approach to developing a no-framework application. Part 2 presents the implementation of a no-framework application. Part 3 rebuts the arguments made against no-framework application development.

Rebuttals

Armed with a working application, we can better refute the arguments made against implementing a no-framework solution, which were the inspiration for this article. While it is true that frameworks perform much of the heavy lifting, they usually carry additional costs: a steeper learning curve for developers, proprietary extensions, reduced transparency, vendor lock-in, and, even worse, version lock-in. Relinquishing control of the programming model, or bending the architecture to fit a proprietary framework, are both costly in the long run.

Data Binding

The argument against direct DOM manipulation is a matter of governance. Data binding often uses a proprietary layer to perform the DOM manipulation, and that code is typically opaque to developers. Ideally, designers can change the presentation without affecting the view code. In reality, a designer’s knowledge is limited to valid HTML: they do not insert the declarative binding extensions, so developers still have to add the data binding extensions to the HTML themselves.

With data binding in MVVM, the framework enforces that only the view can manipulate the DOM. MVP is “leakier”: it requires stricter guidelines, discipline, and code reviews to enforce the same separation. But this is not a technical issue; it is a process and governance issue. Our preference is for higher governance, with its transparency and easier debugging, over opaqueness and proprietary extensions in our HTML. And through higher governance, we still retain the separation between the spheres of MV* responsibility.
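
To make the distinction concrete, here is a minimal sketch of the MVP approach described above: the HTML stays plain markup with no proprietary binding attributes, and a small view class owns all DOM manipulation. TypeScript is used for illustration, and the element IDs and method names are hypothetical.

// A minimal MVP-style view: all DOM manipulation is confined to this class.
// The presenter calls these methods and never touches the DOM itself.
// The element IDs ("customer-name", "save") are hypothetical examples.
class CustomerView {
  private nameEl = document.getElementById("customer-name") as HTMLElement;
  private saveBtn = document.getElementById("save") as HTMLButtonElement;

  // The presenter supplies the data; only the view writes to the DOM.
  showName(name: string): void {
    this.nameEl.textContent = name;
  }

  // The presenter registers its handler without knowing about DOM events.
  onSave(handler: () => void): void {
    this.saveBtn.addEventListener("click", handler);
  }
}

Nothing here is opaque: the DOM updates are ordinary, debuggable code, and the HTML remains plain markup a designer can edit without binding extensions.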

Full Article

Credit: Chris Solutions

jQuery 3.0 Final Release!

jQuery 3.0 is now released! This version has been in the works since October 2014. We set out to create a slimmer, faster version of jQuery (with backwards compatibility in mind). We’ve removed all of the old IE workarounds and taken advantage of some of the more modern web APIs where it made sense. It is a continuation of the 2.x branch, but with a few breaking changes that we felt were long overdue. While the 1.12 and 2.2 branches will continue to receive critical support patches for a time, they will not get any new features or major revisions. jQuery 3.0 is the future of jQuery. If you need IE6-8 support, you can continue to use the latest 1.12 release.

Despite the 3.0 version number, we anticipate that these releases shouldn’t be too much trouble when it comes to upgrading existing code. Yes, there are a few “breaking changes” that justified the major version bump, but we’re hopeful the breakage doesn’t actually affect that many people.

To assist with upgrading, we have a brand new 3.0 Upgrade Guide. And the jQuery Migrate 3.0 plugin will help you to identify compatibility issues in your code. Your feedback on the changes will help us greatly, so please try it out on your existing code and plugins!

You can get the files from the jQuery CDN, or link to them directly:

https://code.jquery.com/jquery-3.0.0.js

https://code.jquery.com/jquery-3.0.0.min.js

You can also get the release from npm:

npm install jquery@3.0.0
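
Once installed, jQuery can be imported as a module. A minimal sketch, assuming a bundler and, for TypeScript, the @types/jquery typings:

// jQuery 3 as an npm module. Note that in 3.0, ready handlers always fire
// asynchronously, even if the DOM has already loaded.
import $ from "jquery";

$(() => {
  // equivalent to the older $(document).ready(...) form
  $("body").addClass("jquery-3-loaded"); // hypothetical marker class
});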

In addition, we’ve got the release for jQuery Migrate 3.0. We highly recommend using this to address any issues with breaking changes in jQuery 3.0. You can get those files here:

https://code.jquery.com/jquery-migrate-3.0.0.js

https://code.jquery.com/jquery-migrate-3.0.0.min.js

npm install jquery-migrate@3.0.0

For more information about upgrading your jQuery 1.x and 2.x pages to jQuery 3.0 with the help of jQuery Migrate, see the jQuery Migrate 1.4.1 blog post.

Slim build

Finally, we’ve added something new to this release. Sometimes you don’t need ajax, or you prefer to use one of the many standalone libraries that focus on ajax requests. And often it is simpler to use a combination of CSS and class manipulation for all your web animations. Along with the regular version of jQuery that includes the ajax and effects modules, we’re releasing a “slim” version that excludes these modules. All in all, it excludes ajax, effects, and currently deprecated code. The size of jQuery is very rarely a load performance concern these days, but the slim build is about 6k gzipped bytes smaller than the regular version – 23.6k vs 30k. These files are also available in the npm package and on the CDN:

https://code.jquery.com/jquery-3.0.0.slim.js

https://code.jquery.com/jquery-3.0.0.slim.min.js
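
For the animation case mentioned above, class manipulation (which the slim build keeps) plus a CSS transition covers many effects that previously called for the effects module. A minimal sketch; the selectors and the “open” class are hypothetical:

// Assumed CSS: .panel { opacity: 0; transition: opacity 300ms; }
//              .panel.open { opacity: 1; }
import $ from "jquery"; // works with the slim build – no effects module needed

$("#toggle").on("click", () => {
  $(".panel").toggleClass("open"); // the CSS transition animates the change
});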

This build was created with our custom build API, which allows you to exclude or include any modules you like. For more information, have a look at the jQuery README.

Compatibility with jQuery UI and jQuery Mobile

While most things will work, there are a few issues that jQuery UI and jQuery Mobile will be addressing in upcoming releases. If you find an issue, keep in mind that it may already be addressed upstream, and using the jQuery Migrate 3.0 plugin should fix it. Expect releases soon. [….]

Full Article

Credit: Timmy Wilson

Application & Tools: Tools that Will Make a Web Developer’s Life Easier

Today the world has grown friendly to developers, with thousands of free tools floating around the market; we just need to be aware of them and keep them in our bucket for whenever they are required.

I will discuss a few important tools which are handy and work out well for every developer.

Tools!!

FIREBUG 2

One of the best tools for developers, installed as a Firefox add-on. It helps you monitor the structure of your HTML and CSS, and also your JavaScript. To add Firebug and get it going, install it now!

It is a web development tool that helps any developer track down client-side issues and also check the response time of requests in the Network tab.

POSTMAN

This is a very important tool: a RESTful API client that saves developers from wasting time debugging and running Visual Studio every time just to check the result set and see how the API behaves. We simply call the API’s URL from Postman and it gives us the result, including the status code.

It allows developers to make any kind of request to an API, be it GET, PUT, POST, or DELETE.

I made a GET request to a public GitHub API to show how Postman reacts: it shows us the JSON result along with the response time and the status (200 OK, 404 Not Found, 403 Forbidden).
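
The same check can be scripted for comparison. A minimal sketch using fetch against a public GitHub endpoint (the URL is just an example):

// A GET request mirroring what Postman displays: the status code, a rough
// response time, and the JSON payload.
async function checkApi(): Promise<void> {
  const started = Date.now();
  const res = await fetch("https://api.github.com/users/octocat");
  console.log(`${res.status} ${res.statusText}`); // e.g. "200 OK"
  console.log(`${Date.now() - started} ms`);      // rough response time
  console.log(await res.json());                  // the JSON result
}

checkApi();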

YSLOW

This is a powerful tool for any developer building a web application: it plays no favorites, and ruthlessly shows how good your web app is and how well it will behave. An example report says it all:

In that example, the web site was given a Grade ‘D’ by YSlow. The report also describes why the grade is low and what can be modified or implemented to improve it. It also prompts developers to apply bundling and minification to their JavaScript and style sheets. So use it and improve the performance of your web app. […..]

Full Article

Credit: Passion4Code

.NET Framework: Generic Repository Pattern in ASP.NET MVC

This article will guide you through creating a small application using the generic repository pattern in the MVC framework. It is targeted at beginner-to-intermediate-level programmers, so that they can understand how to develop an ASP.NET MVC app. After reading this article, you will be in a position to understand the following:

  • The basic concept of performing select, insert, update, and delete operations with an MVC repository
  • How to open a Bootstrap modal popup window and pass values to the modal popup using jQuery
  • How to upload images to a desired storage location from the modal window, and display images in the modal window, with the help of a jQuery ajax call and a generic handler

For the practical application, I am creating a simple Employee repository application which holds basic employee information along with employee documents. All documents will be stored in a folder whose location is specified in the appSettings of the web.config file. The application provides a window for uploading and displaying employee images.
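
The heart of the pattern is a single repository type, generic over the entity, that centralizes the select, insert, update, and delete operations. A minimal sketch of the idea (the article’s own implementation is in C# on ASP.NET MVC; this TypeScript version with an in-memory store is only illustrative):

// A generic repository: one reusable data-access class for any entity type.
interface Entity { id: number; }

class Repository<T extends Entity> {
  private items = new Map<number, T>(); // stand-in for a real data store

  getAll(): T[] { return Array.from(this.items.values()); }
  getById(id: number): T | undefined { return this.items.get(id); }
  insert(item: T): void { this.items.set(item.id, item); }
  update(item: T): void { this.items.set(item.id, item); }
  remove(id: number): void { this.items.delete(id); }
}

// The same class serves any entity, e.g. the Employee of this article.
interface Employee extends Entity { name: string; documentPath?: string; }

const employees = new Repository<Employee>();
employees.insert({ id: 1, name: "Alice" });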

Moreover, you can find similar articles on these topics around CodeProject and other tutorial sites. I would suggest the following tutorials, written by some expert programmers, for further reference:

  1. Implementing the Repository and Unit of Work Patterns in an ASP.NET MVC Application
  2. Generic Repository and UnitofWork patterns in MVC – By Ashish Shukla
  3. CRUD Operations Using the Repository Pattern in MVC – By Sandeep Singh Shekhawat

Now, I would like to briefly discuss how this article differs from the articles listed above. Articles 1 and 3 contain detailed explanations and are purely dedicated to explaining the Repository and Unit of Work patterns. Article 2 is short and direct, but incomplete. Although the article I am writing does not include much theoretical explanation, it introduces the subject matter briefly and aims to provide a detailed implementation, so that anyone with a little knowledge of MVC can understand the concept and start working immediately. It also brings together, in this single article, all of the independent features discussed across the articles above. Moreover, it provides additional techniques for opening a modal popup and for uploading and displaying images, which are not available in any of the articles I listed…….

Full Article

Credit: Sharad Chandra Pyakurel

Design & Architecture: Framework Coupling

Are you using frameworks the right way? Are you using frameworks for business purposes or the other way around? Are your business classes dependent on frameworks?

Can I press the delete button on your Spring, Guice, Hibernate, and JPA dependencies and still be able to test and use your business features? If not, you might have a huge problem – high framework coupling.

Why Is It So Important?

Firstly, frameworks and libraries age much more quickly than our software does. We want to upgrade them as often as possible and change them as easily as possible. Three years from now, the current style of writing Spring applications might be totally obsolete. Or we might want to shift to another cool framework out there. Remember all those EJB applications people wrote in the past? Most of them still exist, and someone has to maintain and develop them. Worse, right now there are probably many people rewriting them in rage to Spring or Jooby. And they’re probably making the same mistake.

Secondly, and this is something even more harmful, high framework coupling leads to untestability. Every extra framework you couple to has to be taken into account when testing, making the testing process more complex, often a lot longer, and, in extreme cases, impossible.

Microservices Don’t Change Anything

One of the arguments I have heard against considering framework coupling a problem is that we write microservices. That doesn’t change anything. No matter what high-level architecture you choose, you will probably write a similar amount of code to cover all the business features. What’s the difference between having one huge, poorly designed application and having twenty small, poorly designed applications? Each of the latter is perhaps more manageable, but you have to test, release, and monitor twenty instead of one.

Then I was told that each of these applications is so small that it’s easy to rewrite it. Wrong way to go. If I have a dozen microservices, each taking 3 months to write and I want to rewrite each every 3 years, I’m getting into an endless loop of rewriting! And no, it probably won’t go faster when “just rewriting”, because in 3 years it won’t be the same team any more – they will have to learn things from scratch or from framework-coupled code.

Background

I first came across this problem in Uncle Bob’s post Screaming Architecture. I looked around my projects and saw a bunch of Spring “services”, repositories, and configuration classes. They all scream Spring. Then I started thinking it through and realized it’s more than just class names. This is the thing that has cost me a lot of nerves and hours in the past – when I wanted to upgrade a framework’s version, delete one completely from the system, test something in a framework-coupled application, or read code written years ago against a company’s internal frameworks.

Solution

The solution is easy, but uncomfortable for most developers. You have to change the way you write your applications.

Set clear boundaries for your business components. Make sure none of these boundaries are violated by framework dependencies. There is a great video about architecture and boundaries on Clean Coders: Architecture, Use Cases, and High Level Design.

Whenever you want to use some of a framework’s capabilities, you should invert the dependency (the Dependency Inversion Principle). Sometimes this requires some extra classes, e.g., interfaces or adapters, but it’s really worth it. This also applies to JPA and other persistence mechanisms. You should never use persistence data structures as business objects – data structures and objects are different by definition!
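
A minimal sketch of that inversion, in TypeScript for brevity (the same shape applies to Spring, Hibernate, or JPA): the business code owns the interface, and a thin adapter outside the boundary implements it. All names here are illustrative.

// Business side: owns the abstraction and knows nothing about frameworks.
interface Order { id: number; total: number; }

interface OrderStore {
  save(order: Order): void;
}

class PlaceOrder {
  constructor(private store: OrderStore) {}

  execute(order: Order): void {
    if (order.total <= 0) throw new Error("invalid order"); // a business rule
    this.store.save(order); // persistence happens behind the interface
  }
}

// Outside the boundary: adapters implement the interface. In production this
// would wrap Hibernate/JPA; in tests a trivial in-memory version suffices.
class InMemoryOrderStore implements OrderStore {
  orders: Order[] = [];
  save(order: Order): void { this.orders.push(order); }
}

// The business feature is fully usable and testable with no framework in sight.
new PlaceOrder(new InMemoryOrderStore()).execute({ id: 1, total: 99 });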

Once you have all of the dependencies inverted, you might want to put the business-related code into a separate physical component, i.e., a separate JAR file. Physical boundaries are the ultimate protection against unwanted dependencies – your code won’t compile unless you deliberately pull them in.

It should seem natural at this point that you strive to keep your test suite as framework-independent as possible. And I don’t mean frameworks like JUnit or Cucumber here. I mean that your business code should be 100% testable without using any of the frameworks your non-business components use, like Spring or Hibernate.

Another, more general rule that should be followed is: keep your frameworks easy to use and easy to remove. Even if you probably won’t change all the frameworks in your application, it is more than certain that you will want to upgrade their versions. Make it a pleasure to use newer technologies, not a pain to deal with the old ones.

Full Article

Credit: Grzegorz @ tidyjava.com

All Software is Legacy

In what may be judged in years to come as a moment of madness, I have volunteered to be the primary maintainer of the Perl CGI module (CGI.pm). For the non-technical readers of this post: CGI.pm is a few thousand lines of code that in the mid to late nineties, and even some years later, was helping many websites function. Ever visited a website and seen ‘cgi-bin’ in the URL? Yep, that was probably running Perl scripts, and those were almost certainly using CGI.pm.

I actually volunteered to be the primary maintainer back in April 2014. The reason I’ve taken so long to write this post is that I’ve been busy, er, maintaining the module. I’ve fixed the bulk of existing issues[1], written and given a talk on the plan for the module[2], released an extra module to point people at better resources[3], and occasionally been responding to questions about the module[4] – oh, and of course the usual reason: it takes posts several months to get out of my drafts folder.

Despite having used the module frequently over the years, and even volunteering to be the primary maintainer, I do not like it. It was an important and useful module early on, but it has no place in a modern [perl] web stack and hasn’t deserved a place in at least a decade. This is not a criticism of the original author(s) or the original implementation, it’s simply down to the fact that the web development field has progressed and lessons have been learnt.

An important point to make is the difference between CGI and CGI.pm. CGI is the Common Gateway Interface protocol, or specification if you like, whereas CGI.pm is an implementation of that specification. CGI is still a reasonable protocol for doing web programming in some cases, whereas CGI.pm is not.[5]

CGI.pm wasn’t the first implementation, but it was widely adopted after being included with the Perl core:

/Users/leejo/working/CGI.pm > corelist CGI

Data for 2013-08-12
CGI was first released with perl 5.004

And when was perl 5.004 released? 15th May 1997, almost twenty years ago.

The Past

Up until that point, if you wanted to do CGI programming with Perl you had to install CGI.pm manually, write your own implementation, or install scripts that did it for you. A well-known example is cgi-lib.pl.[6] In fact, it would probably be fair to say cgi-lib.pl was commonly used, since CGI.pm included functions to make porting scripts from cgi-lib.pl easy.

Over time CGI.pm grew and grew, and grew some more, until it had implemented most (if not all) of the CGI protocol specification and beyond: https://tools.ietf.org/html/rfc3875

Take a look at that RFC and see if anything stands out. I’ll give you a clue: it’s to do with the date… Got it? Yes, RFC 3875 was finalised in October 2004, some seven years after CGI.pm was released with Perl and at least a decade after the original NCSA informal specification was released. Work on RFC 3875 didn’t start until 1997, by then there were already many different implementations of a specification that had no official formal definition.

The first official draft of the CGI specification was not released until May 1998. By then there were several large sites already running on Perl, and even with CGI.pm: eBay, IMDb, cPanel, Slashdot, Craigslist, Ticketmaster, Booking.com, several payment processors, and many, many others.[7]

Full Article

Credit: Lee Johnson

  1. “Fixed” meaning either resolved or rejected
  2. The first five minutes here and slides viewable here
  3. CGI::Alternatives
  4. Eh, there’s a few bits and pieces in various places. Perlmonks, LinkedIn, Github, etc.
  5. One Two Three Four
  6. http://cgi-lib.berkeley.edu/ – and of course Matt’s script archive.
  7. https://en.wikipedia.org/wiki/Perl#Applications and https://news.ycombinator.com/item?id=10590612

Developers: APIs are crucial to business, but tough to get right


A survey of API developers claims security, customer satisfaction, and speed of deployment are among the biggest challenges

APIs matter, big time, and not offering an API deprives your software or service of a crucial audience. But it’s tough to get an API right because of unintegrated tooling, security issues, and the difficulty of iterating and resolving problems quickly.

These and other insights are part of the “State of API Survey Report 2016” issued this week by API testing and tooling company SmartBear. Assembled from surveys of more than 2,300 developers in 104 countries, the report looked at four major categories: technology and tools, development and delivery, quality and performance, and consumption and usage.

Mobile matters, as does security

The conventional wisdom about APIs is that they’re mainly Web and mobile powered, and that view holds up. Of those surveyed, 86 percent reported that their APIs supported Web experiences, with 64 percent supporting mobile.

But the widely ballyhooed Internet of things was much further down the list at 20 percent, after desktop (40 percent) and automation (39 percent). One possible explanation is that mobile and desktop deliver immediate and proven value, while IoT remains better in theory than in practice.

But expectations for the importance of IoT in APIs remain high, with 44.4 percent of respondents claiming IoT would be a future driver for the API industry. Nonetheless, the top slot belonged to mobile, at 54.1 percent.

The biggest challenges cited for developing APIs echo those in software development generally: security (41 percent) and easier tool integration (39 percent). The former comes as no surprise, what with insecure APIs showing up in everything from Dropbox to the Nissan Leaf.

Standardization, in third place at 25 percent, might get a boost thanks to the Swagger specification becoming the OAI (Open API Initiative) and opening up via the Linux Foundation. Interestingly, one of the key selling points of the OAI, discoverability of APIs, ranked quite low in the survey (11 percent) as a perceived challenge to API developers.

Full Article

Credit: Serdar Yegulalp

ASP.NET: Async Await with Web Forms Over Multiple Postbacks

Support for asynchronous procedures (using async/await) is great in the C# and VB languages, and works very well in desktop applications (WinForms, WPF, console, and others). But in web applications, such as those based on ASP.NET Web Forms, support for asynchronous procedures is much less exposed. Microsoft itself states that support for asynchrony (using async/await or other methods) is limited to offloading worker threads and increasing throughput, and that all asynchronous procedures must complete their work before the rendering phase (generating HTML).

But there are also more useful scenarios, where an asynchronous procedure is started in one postback and completed in a subsequent one, so its execution spans multiple postbacks. This allows for UI-driven asynchronous processing, which is useful for most web apps. There are no examples or explanations of how to do this, and some folks even state that it is impossible. Fortunately, it is possible, and here is how to do it.

Description of Method

All that is necessary is to feed await with a non-started task, store that task, and run it in some subsequent postback at the user’s request. This way, any execution of continuations is limited to the page-processing phase, and we have full control over it. But for our needs, using tasks is not the best option. It is better to create custom awaiters that act like triggers, and switch them at the user’s request to continue execution of the asynchronous procedures.
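
The mechanism is easiest to see stripped of ASP.NET specifics: an awaitable that starts un-completed, is stored between requests, and is triggered later by the user’s action. A minimal sketch of such a trigger in TypeScript (the article implements the analogous idea as a custom C# awaiter; all names here are illustrative):

// A manually-triggered awaitable: awaiting its promise suspends the async
// procedure until trigger() is called – e.g., in a later postback.
class Trigger<T> {
  private resolve!: (value: T) => void;
  readonly promise: Promise<T>;

  constructor() {
    this.promise = new Promise<T>((res) => (this.resolve = res));
  }

  trigger(value: T): void {
    this.resolve(value); // continuations run from this point
  }
}

// Usage sketch: the flow awaits the trigger; a later event completes it.
const confirmed = new Trigger<boolean>();

async function flow(): Promise<void> {
  console.log("step 1 (first postback)");
  const ok = await confirmed.promise; // suspends across "postbacks"
  console.log("step 2 resumed:", ok); // continues when triggered
}

flow();
confirmed.trigger(true); // simulates the user's action in a later postback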

Full Article

Credit: Tristan9

Microsoft plans to add containers to Windows client

Microsoft has been public about its plans to add two types of containers to Windows Server 2016. But so far, company officials haven’t talked about plans to bring container support to Windows client.


However, adding containers to Windows 10 looks to be in the cards, as a recent Microsoft job posting makes clear.

From the job post seeking a senior program manager on Microsoft’s Redmond operating systems engineering team:

“There are a large number of client focused scenarios, currently unannounced, where Containers form the core pivotal technology providing security, isolation and roaming ability. To deliver this, we are creating a new team with a mission to impact client computing in the same revolutionary manner we are changing the datacenter.

“The Senior Program Manager who takes this challenge will own and drive the end-to-end container scenarios across Windows client. This includes driving large cross group initiatives to deliver a complete customer-focused vision. Your stakeholders will include multiple teams within and outside Windows, spanning multiple technologies such as user experience, security, storage, and Networking.”

What would container support in Windows client mean from a security standpoint? Instead of using a virtual machine to run a browser, a user could use a Hyper-V container to isolate the browser from other apps running on the operating system. That could keep attackers from infiltrating other parts of the Windows OS via a browser attack.

Over the past several years, Microsoft Research has investigated ways to make the Windows OS more secure. The ServiceOS project — formerly known as “Gazelle” and “MashupOS” — aimed to tighten security by isolating the browser from the OS. There seems to have been little, if any, work to advance ServiceOS for the past few years, however.

There also was some browser-security work happening inside the company via a project known as XAX. XAX was a browser plug-in meant to allow users to safely run x86-native code as a browser extension, using picoprocesses, a micro-virtualization framework.

Drawbridge, a Microsoft Research project dedicated to creating a new way of using virtualization for application sandboxing, also was focused on using container technology. Drawbridge combined picoprocesses and a library OS.

The Windows Server team didn’t end up using Drawbridge as the base for its container-development work. The Windows Server and Hyper-V container technology that’s built into the current previews of Windows Server 2016 will be available in final form in the second half of 2016 when Windows Server 2016 is released.

I’m hearing Microsoft also is not planning to use any of its previous research technologies as the base of what it’s planning to do around containerization in Windows client. The Windows client container work, which one of my contacts says is codenamed “Barcelona,” has no connections to Drawbridge, XAX or ServiceOS, I’ve heard.

(Note: This isn’t the first time Microsoft has used “Barcelona” as a codename. Back in 2010, there was a Microsoft Barcelona Index Server that I had heard was in development by the SQL Server team.)

I don’t know if Microsoft is looking to make container technology available in Windows 10 during the same time frame (which would mean around the time “Redstone 2” is available). Given the way that job posting is worded, I’m thinking it could be later than that.

I also don’t know if Windows Containers would and could, one day, replace App-V, Microsoft’s application-virtualization technology, which allows apps to run in their own, self-contained virtualization environments on Windows. But it sounds like quite a few users would love to see that come to pass.

I’ve asked Microsoft for comment on its planned timetable and other information regarding its Windows client container plans. If/when I hear back, I will update this post. In the meantime, Microsoft Technical Fellow and Azure Chief Technology Officer Mark Russinovich’s August blog post about Windows and containers makes for good reading.

Credit: Mary Jo Foley for All About Microsoft