Why Drupal?

There are many obvious advantages to leveraging a CMS when creating a new website or web application: a CMS allows non-technical contributors to manage their own content, separates content from structure and design, and helps enforce standards for metadata, images, and so on. In addition, most capable CMS tools also include a significant list of features to facilitate common tasks such as search, administration, collaboration, blogging, ecommerce, workflow, email notifications and so on.

Drupal shares most, if not all, of these characteristics with many of the better CMS tools available today, but it also has several characteristics that give it a distinct advantage. Many of these advantages stem from the strong Drupal community of developers, testers, documenters and other supporters. Here are some of the advantages that keep Drupal in the lead:

  1. Cost: As an open source tool, Drupal is completely free to download, install, configure and use in its fully functional form. It’s also freely available to modify and extend to suit your needs. Modules, themes and tools created by the Drupal community are also freely available.
  2. PHP: Although not completely unique to Drupal, the community takes full advantage of the fact that Drupal is built using PHP, which has a much gentler learning curve than languages such as Java, C, Python or Ruby.
  3. Internationalization: Not only can Drupal be installed in several languages of your choosing, it can also be configured to serve content specific to the language of individual users. Menu links automatically update to reflect the language in use. Translations are continuously maintained by the Drupal community through the Drupal Localize project (https://localize.drupal.org/).
  4. Resource Availability: When working on a large-scale project, staff attrition can cause significant delays due to the difficulty of finding replacement staff. While finding quality staff is always a challenge, Drupal’s extensive open source community can help to soften this blow.
  5. Performance: There are numerous tools within Drupal core, and even more contributed by the community, for enhancing performance in many diverse ways. Historically, one of the main performance bottlenecks was that for authenticated users each page load performed the full Drupal “bootstrap”, which loaded a tremendous amount of code and forced you to choose between personalization and performance. With the release of Drupal 8, with its powerful Cache API, less dependency on hooks, and explicit file dependencies, Drupal can often perform even faster (a brief sketch of the Cache API appears after this list).
  6. Integration: As new technologies emerge while old technologies continue to support legacy processes and data, the challenge of integrating them becomes acute. Drupal’s extensible, open source model can help bridge the gap. There are thousands of modules (16,000+) developed by community members and organizations to solve these types of problems, and they’re freely available for you to use or modify to fit your specific case. If you can’t find a module to meet your needs, you can develop a custom module using examples and resources available through the Drupal community.
  7. Headless Drupal: This emerging technique uses Drupal as the back end and exposes content via a REST service, which can then be consumed by any platform, independent of the language being used. This provides a couple of potential advantages: front-end and back-end development teams can work relatively independently of each other, and content is kept truly distinct from presentation. Drupal and its powerful tools can be used by content creators and editors without placing any limits on front-end presentation on your website, mobile app, intranet sites, etc.
  8. Modification: Every tool can be modified to some extent to meet the needs of an application, but too often this modification impacts the scalability of the tool. The further the tool is bent to meet the application’s requirements, the further it can be pulled from its core strengths, thus creating a bottleneck. This is where Drupal has perhaps its greatest advantage, as highlighted by the wide variety of available distributions which can solve your specific business need. These distributions are very diverse in nature, for example: CRM Core (CRM), Open Atrium (collaboration), Commerce Kickstart (ecommerce), OpenPublish (publication sites), Open Outreach (non-profits), etc. Apart from this, large, high-traffic sites such as Drupal.org itself are built on Drupal, which demonstrates the scalable nature of this leading software platform, and the community remains active on channels such as Stack Exchange, Facebook and Twitter.
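
As a rough illustration of the Drupal 8 Cache API mentioned under Performance above, the sketch below caches the result of an expensive calculation in the default cache bin. The module name mymodule and the mymodule_build_report() helper are hypothetical placeholders; real code would pick its own cache ID and cache tags.

    <?php

    use Drupal\Core\Cache\CacheBackendInterface;

    /**
     * Returns a report, caching it in the default bin between requests.
     */
    function mymodule_get_report() {
      $cid = 'mymodule:report';

      // Serve the cached copy if one exists.
      if ($cache = \Drupal::cache()->get($cid)) {
        return $cache->data;
      }

      // Otherwise build the data once and cache it permanently, tagged so it
      // is invalidated automatically whenever node content changes.
      $data = mymodule_build_report();
      \Drupal::cache()->set($cid, $data, CacheBackendInterface::CACHE_PERMANENT, ['node_list']);
      return $data;
    }

    /**
     * Hypothetical stand-in for an expensive calculation.
     */
    function mymodule_build_report() {
      return ['generated' => \Drupal::time()->getRequestTime()];
    }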

Optimize Drupal under slow network speeds

Site loading time is critical, especially when your website is accessed over a network with slow or fluctuating speeds. If a request times out or content takes too long to load, the user may lose interest and switch to another site. After one bad experience on your website, it is highly unlikely that a user will visit again. Slow-loading sites invariably result in lost traffic and lost business. This is why giving your website visitors a pleasant experience is critical.

According to surveys done by Akamai and other content hosts, most web users expect a site to load in two seconds or less, and they tend to abandon a site that hasn’t loaded within three seconds. 79% of web shoppers who have dealt with a poorly performing site say that they would not return to it, and of those, 44% would tell a friend that they had a poor experience shopping there.

Page loading time is determined by the size of the page and the conditions under which it is delivered, which depend on:

  1. The size of the text content.
  2. The number and size of the external files it references (JavaScript, style sheets, images and multimedia).
  3. The user’s internet connection (bandwidth and latency).

Since you don’t have any control over the user’s network conditions, all optimizations have to be performed at the server and Drupal application levels. There are no magical, out-of-the-box methodologies or technologies that can be applied to improve Drupal site performance when network speeds are erratic. You have to carefully examine user expectations and optimize the performance of your website to meet those expectations.

What are some of the key website experience expectations of site users?

  1. The website should perform well even at the slowest network speed.
  2. Web pages should load in 3 seconds or less.
  3. If the user is working on a form and loses network connectivity, the information already filled in should not be lost. Rather, the user should be able to continue from where he or she left off upon returning to the site (a minimal sketch of one way to approach this in Drupal follows this list).
  4. If, for some valid reason, a page has to take longer than 3 seconds to load, the user should be notified and provided with an explanation.
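
The sketch below illustrates the third expectation. It is only a minimal illustration, not the behavior of the contributed Save Form State or Autosave modules discussed later in this article; the module name mymodule and the form ID example_contact_form are hypothetical placeholders. The idea is to keep a per-user copy of the values entered in a Drupal 7 form so they can be restored after a dropped connection.

    <?php

    /**
     * Implements hook_form_FORM_ID_alter() for a hypothetical form.
     *
     * Restores a previously saved draft and adds a "Save draft" button.
     */
    function mymodule_form_example_contact_form_alter(&$form, &$form_state, $form_id) {
      global $user;
      $cid = 'mymodule:draft:' . $user->uid;

      // Repopulate simple top-level fields from the last saved draft, if any.
      if ($cache = cache_get($cid)) {
        foreach ($cache->data as $field => $value) {
          if (isset($form[$field]) && is_scalar($value)) {
            $form[$field]['#default_value'] = $value;
          }
        }
      }

      // A button that stores the current input without triggering validation,
      // so even a half-completed form can be saved.
      $form['save_draft'] = array(
        '#type' => 'submit',
        '#value' => t('Save draft'),
        '#limit_validation_errors' => array(),
        '#submit' => array('mymodule_save_draft_submit'),
      );
    }

    /**
     * Submit handler: keep the raw input until the next cache clear.
     */
    function mymodule_save_draft_submit($form, &$form_state) {
      global $user;
      cache_set('mymodule:draft:' . $user->uid, $form_state['input'], 'cache', CACHE_TEMPORARY);
      drupal_set_message(t('Your draft has been saved.'));
    }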

How can user expectations be met?

By following some web design best practices and using recommended Drupal performance optimization techniques, you can provide the user with a first-rate website experience. Drupal’s built-in caching is the easiest way to improve performance on your site, but it is not sufficient by itself. For large amounts of content under heavy traffic, a default Drupal installation doesn’t scale well. To optimize performance under these conditions, you need to apply multiple caching strategies as well as server-level optimization.

The following is a list of optimizations at the Drupal application and the server levels:

Application Level Optimization

  • Save form state / save form data as a draft
    • Auto-save the data entered in a Drupal form without actually submitting the form. The saved data can be used to restore the form if internet connectivity is lost while the user is filling it out.
    • You can also save form data as a ‘draft’, which can be used to restore the form at a later point.
    • Utilize contributed modules such as “Save Form State” and “Autosave” to save the form data.
  • Cut down on multiple HTTP requests
    • Loading a single web page involves sending multiple HTTP requests to the server for the different elements of the page (JavaScript, style sheets, images, etc.). Loading fewer resources cuts down on parallel HTTP requests and thereby improves site loading time.
  • Smaller page sizes
    • Keep page size to a minimum to avoid slow-loading web pages.
    • Avoid resource-heavy themes and keep the theme as light as possible; heavy themes add overhead to page load time.
  • Show notifications for slow-loading pages
    • If a page’s load time is unacceptable, notify the user about the cause of the delay.
    • Provide an alternative link, such as a lighter version of the page.
  • Show file sizes for links/downloadable items
    • Show the file size for all downloadable items, so that users are aware of the bandwidth a download will consume.
  • Defer the parsing of JavaScript
    • Load JavaScript files in the footer scope, so that the HTML elements load first in case of slow connectivity.
  • Pages should not auto-refresh
    • Avoid auto-refreshing pages to save bandwidth.
  • Pages should have limited content
    • Use pagination to avoid displaying long listings of content.
    • If you want to load a long listing of content on a single page, use ‘lazy’ loading to cut down the page load time.
  • Aggregate and compress CSS files
    • Use the CSS aggregation and compression utilities available in Drupal 7 core.
  • Aggregate JS files / use minified JS files
    • Use the JS aggregation functionality that is available in Drupal 7 core.
  • Turn Views caching on
    • Views caching can be enabled from each view’s advanced settings.
  • Enable block-level caching
    • This feature is also provided in Drupal 7 core.
  • Disable modules that you are not using
    • Uninstall modules like ‘Color’ that are resource intensive and not commonly used.
  • Move images, videos and static files to a CDN (Content Delivery Network)
    • You can use the contributed modules that are available for CDN integration.
  • Implement caching in your custom modules as needed
    • Try to cache the output of custom modules, especially when they involve complex business computations (see the sketch after this list).
  • Keep Drupal core and contributed modules updated
    • Keeping core and contributed modules updated lets you benefit from the latest performance improvements implemented by the Drupal community.
  • Enable caching for authenticated users
    • Enable caching of data for authenticated users using the ‘Memcache’ module.
  • Cache Panels content
    • Cache Panels content with ‘Panels Content Cache’ or ‘Panels Hash Cache’.
  • Load images only when needed, with lazy loading
  • Use performance-related Drupal modules such as:
    • Performance and scalability checklist.
    • Performance logging and monitoring.
    • XHProf PHP profiler.
  • Rewrite slow-performing Views queries
  • Use image sprites to reduce the loading of multiple images
  • Reduce 404 and 403 errors
  • Reduce image size
    • Use image styles to reduce image size on page load.
    • Use style sheets to replace images that serve no purpose other than layout.
    • In general, due to the compression techniques used, the JPEG format is better suited to photographs, while the GIF and PNG formats are better for bitmap graphics such as logos and images that contain areas of discrete colours.
  • Turn page caching (the Drupal default) on
    • It is available by default with Drupal core.
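
The sketch below illustrates the custom-module caching item above, using the Drupal 7 cache functions. The module name mymodule and the expensive computation are hypothetical placeholders; real code would choose its own cache ID, bin and expiry.

    <?php

    /**
     * Returns a block or page body, caching the expensive part for 15 minutes.
     */
    function mymodule_dashboard_content() {
      $cid = 'mymodule:dashboard';

      // Serve the cached copy when one is available and not yet expired.
      if (($cache = cache_get($cid)) && $cache->expire > REQUEST_TIME) {
        return $cache->data;
      }

      // Otherwise run the expensive business computation once...
      $output = mymodule_run_expensive_computation();

      // ...and keep the rendered result for 15 minutes.
      cache_set($cid, $output, 'cache', REQUEST_TIME + 900);
      return $output;
    }

    /**
     * Hypothetical stand-in for a complex business computation.
     */
    function mymodule_run_expensive_computation() {
      return '<p>' . t('Report generated at @time.', array('@time' => format_date(REQUEST_TIME))) . '</p>';
    }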

Server Level Optimization

  • Enable browser caching
    • Enable caching so that the browser keeps a copy of data that has already been received, avoiding the need to request it again.
    • Use static content where possible.
  • Turn on page compression
    • HTTP compression is a technique supported by most web browsers and web servers. When enabled on both sides, it can automatically reduce the size of text downloads (including HTML, CSS and JavaScript) by 50-90%. It is enabled by default in modern browsers; however, many web servers do not enable it by default, so it has to be explicitly turned on.
    • Enable gzip compression; it can make HTML and other text resources load significantly faster on slow connections.
  • Enable APC (Alternative PHP Cache) or another PHP opcode cache
    • Use PHP-FPM (a FastCGI implementation) instead of mod_php.
  • Use syslog
    • Send logs to your hosting OS instead of writing them to the database.
  • Enable a reverse proxy such as Varnish to cache content, with a memory-based cache such as Redis or Memcache behind Drupal
    • Contributed modules are available which facilitate integration of Drupal with Varnish and Redis (a settings.php sketch follows this list).
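
The snippet below is a rough sketch of what some of these server-level settings can look like in a Drupal 7 settings.php. The proxy address, cache lifetime and module path are illustrative assumptions that must be adapted to your hosting environment, and the Memcache lines assume the contributed Memcache module is installed.

    <?php

    // Trust a reverse proxy (e.g. Varnish) sitting in front of the web server.
    $conf['reverse_proxy'] = TRUE;
    $conf['reverse_proxy_addresses'] = array('127.0.0.1');

    // Cache pages for anonymous users and let external caches keep them
    // for up to 15 minutes.
    $conf['cache'] = 1;
    $conf['page_cache_maximum_age'] = 900;

    // Compress cached anonymous pages (gzip).
    $conf['page_compression'] = TRUE;

    // Store cache bins in Memcache instead of the database, which also helps
    // authenticated traffic (requires the contributed Memcache module).
    $conf['cache_backends'][] = 'sites/all/modules/memcache/memcache.inc';
    $conf['cache_default_class'] = 'MemCacheDrupal';
    // Keep the form cache in the database, where it is safer.
    $conf['cache_class_cache_form'] = 'DrupalDatabaseCache';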

You need optimization at both the Drupal application level and the server level to tune the performance of your website. Done right, this will provide the user with smoother functionality even over an unreliable, low-bandwidth network.

Ashley Madison: When the Cheaters got Hacked

Independent of the obvious moral and ethical challenges that the recent hack of the Ashley Madison online cheating and adultery website raises, it is clear we have entered a new era of malware, viruses, worms, ransomware, trojans, phishing attacks and botnets.

Cryptolocker, a trojan by design, ushered in this new era of cyber bribery, extortion, corruption and ransomware. In the first nine months after its release into “the wild”, Cryptolocker affected over 400,000 individuals, who were told to pay $300 within a three-day period after the malware encrypted most of the data on their systems. If the ransom was not paid, the infected user’s files would remain encrypted and inaccessible forever.

Ashley Madison, which boasts “Life is short. Have an affair.”, was compromised on July 11 by a group called the Impact Team. This event resulted in a data breach of up to 10 GB of data and the compromise of approximately 30 million user accounts. The data elements compromised in this breach included first and last names, street addresses, phone numbers, account names, hashed passwords, e-mail addresses, credit card information and, in some cases, GPS coordinates, along with Windows domain accounts and other data related to Ashley Madison’s internal network, suggesting a much broader compromise of their infrastructure. Although Ashley Madison is not disclosing technical details about this breach, we can assume with a fairly high degree of certainty that multiple control failures occurred across their web server, perimeter network, firewall(s), operating system(s), back-end database and identity infrastructure.

It is clear that, as with so many organizations, the need to embrace and embed best practices into our networks and operating procedures is more essential than ever. Constant vigilance and adherence to industry standards like NIST 800-122 (Protecting the Confidentiality of Personally Identifiable Information (PII)), NIST 800-144 (Security and Privacy in Public Cloud Computing), the ISO 27000 series and the 12 primary control objectives of PCI DSS 3.1 are minimum standards that must be embraced today.

Unfortunately, as in the case of Cryptolocker, various cyber exploitation and ransom schemes are now surfacing, including cyber extortion and ransoms requiring bitcoin payments, among many others.

Although Ashley Madison may want to reconsider its business model and undertake a total revamp of its security infrastructure, what I might suggest for its end users is that they consider taking their own partners out for an intimate dinner and a nice movie rather than someone else’s. It might lead to far fewer complications in their lives.

Security Breach Headliners: A Closer Look at the OPM Breach

The first half of 2015 has been a season of information security breaches…and the biggest of all was a massive data breach at the U.S. Office of Personnel Management (OPM). OPM was impacted by two separate but related cybersecurity incidents involving data of Federal government employees, contractors, and others. In April 2015, OPM found that the personnel data of 4.2 million current and former Federal government employees had been stolen. While investigating this incident, in early June 2015, OPM identified that some additional information had also been compromised, including background investigation records of current, former, and prospective Federal employees and contractors. OPM and an interagency team from the Department of Homeland Security (DHS) and the Federal Bureau of Investigation (FBI) have been investigating these incidents, and are working to put in place changes that should prevent similar thefts in the future.

Fatigued by the enormous efforts required to ensure a quick, effective and orderly response to information security incidents, organizations may sometimes lose sight of a broader and more holistic approach to information security management. Often this narrow approach leads to a mad rush of aggressively adding numerous tools and technologies, which may not be an optimal, comprehensive and risk-based approach to preserving the confidentiality, integrity and availability of organizational information.

So what should organizations do?

Without doubt, when a breach occurs, the organization should immediately respond to take charge, try to correct what has gone wrong, deal with the outcomes of what has already happened, conduct root cause analysis and implement additional or modified controls (preventive, detective, corrective, compensatory, deterrent), so that a similar breach does not recur or occur elsewhere in the organization.

However, in the interest of a more robust and comprehensive information security approach, organizations should also consider the following Risk & Compliance Lifecycle:

Harmonize – Map requirements and identify a mutually exclusive and collectively exhaustive list of controls, specified in:

  • Risk registers, vulnerabilities log, incidents log and audit reports
  • Industry standards (such as ISO, NIST, etc.)
  • Legal, statutory, regulatory and contractual obligations
  • Organization’s corporate & business unit policies, procedures, controls list

Assess – Conduct risk and compliance assessments based on the harmonized requirements, using a combination of questionnaires and data analytics (correlation and prediction). Additionally, use combinations of brainstorming, HAZOP, the Structured “What-if” Technique (SWIFT), scenario analysis, business impact analysis (BIA), root cause analysis (RCA), failure modes and effects analysis (FMEA), fault tree analysis (FTA), etc. to identify additional risks, keeping both a top-down and a bottom-up view.

Risk & Compliance Lifecycle

Strengthen – Update and implement management systems, policies, procedures, guidelines, design documents, etc. to reflect the additional or modified controls identified as part of the above assessment.

Samit Khare

Getting to the CORE of Fast App Development

TruOps Business Integration Platform Defined – 40x Faster App Development

If you are looking to build apps that give your organization a differentiated customer experience, then look no further.   In deploying technology to support and automate internal processes and to replicate functionalities online, the objective is typically to reduce costs and make the customer experience easier, more convenient and more engaging.

But if you don’t have a business integration platform, that objective is easier said than done. SDG’s Big Data Framework – TruOps CORE – enables an enterprise to exceed that stated objective and to grow quickly because SDG provides an instant technology stack.  The result is a significant reduction in development time and time-to-market.

Today, time-to-market is the be-all and end-all. Beating your competitor to market can provide your organization with unprecedented growth and revenue.  With TruOps CORE, we’re finding that we can accelerate development by 40% and achieve a comparable cost savings over developing an enterprise application from scratch.

TruOps CORE is not just for application development.  Many organizations have multiple business systems that need to be integrated into a single business process.  To manually or separately collate and correlate data from individual systems is a huge and ongoing challenge.

Without a business integration framework, most organizations’ unification efforts die a slow death. Executives sometimes focus only on their own processes while other executives work at cross-purposes.

With TruOps CORE it is easy to unify processes and easier yet to provide a unified view with engaging graphics to all executives.  TruOps configurable dashboards and visualizations enable the enterprise to see and react as one, and to react in real time through a proprietary notification engine.

TruOps Core works for enterprises across the globe and across industry sectors:

  • A multi-national Fortune 10 financial services company used the TruOps Core platform to collect business metrics from functions and business units around the world to populate a globally available dashboard and reporting interface.
  • A media analytics firm uses an application built on TruOps Core to analyze TV long form and short form advertising data to make media buy/sell recommendations to their clients.
  • A garment manufacturer and retailer used TruOps Core to build an availability and performance dashboard for their business systems.