Introduction
JavaScript emerged in 1995 as a humble browser scripting language, but over the next three decades it evolved into the world’s most ubiquitous programming language[1]. This report chronicles JavaScript’s history from 1995 to 2025, weaving together the formal, institutional developments (standards bodies, language specifications, browser engine advances, governance structures) with the informal, practice-driven evolution (frameworks, libraries, tools, coding idioms, and community norms). We will see what happened in JavaScript’s evolution and why – how technical constraints, business incentives, and community innovations co-shaped the language’s trajectory. Key themes include the interplay between de facto practices and de jure standards, the influence of engine innovations on what JavaScript could do, the expansion of the browser platform APIs, successive tooling revolutions, the rise and fall of major framework paradigms, JavaScript’s spread beyond the browser (to servers, desktop, mobile, and edge), the inner workings of the language design process (TC39), and ongoing challenges around security, internationalization, and community governance.
We organize the narrative by distinct eras (see §1995–1999 through §2023–2025), each characterized by dominant technologies and concerns. Within each era, we highlight notable standards vs. reality gaps – moments where grassroots conventions forced or accelerated formal standardization (e.g. JSON, Promises, ES modules) – and how formal specs sometimes anticipated needs ahead of practice. We will trace how browser engine innovations (from the first interpreters to modern Just-In-Time compilers and multi-tier engines) unlocked performance and new use cases, shaping what was feasible in JavaScript. As the Web platform expanded (DOM, AJAX/XHR, HTML5, WebSockets, Web Workers, WebGL, etc.), JavaScript’s capabilities grew, often driven by WHATWG and W3C specifications responding to developers’ demands. We document successive tooling waves – from simple lint tools and task runners to sophisticated bundlers and dev servers – showing how each wave redefined developer workflows and application architecture. In parallel, we periodize the major framework epochs: from early DHTML and prototype-based libraries, to jQuery’s dominance, through the MVC/MV* frameworks of the early 2010s, into the React-led era of components and reactive UIs, and finally to today’s “meta-frameworks” and fine-grained reactivity approaches. Each shift is explained in terms of the problems it solved (cross-browser DOM pain, state management, performance bottlenecks, developer experience) and the trade-offs it introduced.
Beyond the browser, we examine JavaScript’s journey on the server (Node.js and its ecosystem), the desktop (Electron), mobile (Cordova, React Native), and the edge (serverless functions and new runtimes like Deno and Bun). These environments reshaped expectations for modules, packaging, and security. The language design process itself is demystified – we explain how TC39’s proposal stages work and highlight notable features from ES2015 through ES2024 (class syntax, modules, async/await, BigInt, optional chaining, Temporal, etc.), linking each to the real-world pain points that motivated them. We also address security and reliability issues that arose at scale: from the Same-Origin Policy and Content Security Policy in browsers to supply-chain incidents in the npm ecosystem (like left-pad and event-stream) that prompted new measures (integrity hashes, package signing, permission models). Internationalization (i18n) and accessibility (a11y) get their due: we outline the growth of the Intl API for locale-aware behavior and how frameworks and tools increasingly bake in i18n/a11y best practices.
Throughout, the sociology and economics of JavaScript are considered. We note the role of community conferences (like JSConf), the formation of foundations (e.g. the evolution from the jQuery Foundation to the OpenJS Foundation), the influence of big tech (e.g. Microsoft’s stewardship of TypeScript and GitHub, Google’s frameworks, Facebook’s React and React Native), the biases in surveys like "State of JS" (often reflecting an Anglophone, early-adopter subset), and how an ethos of open-source collaboration (and occasional fatigue) shaped best practices. Economic incentives – from the ad-driven Web performance race, to the demand for developer productivity, to cloud/platform competition – provide a backdrop explaining why JavaScript took certain directions.
Finally, we explore a few counterfactual scenarios – “what if” moments where history could have diverged – to illustrate why JavaScript ended up on its particular path. These include what if a radical proposal like ES4 had been adopted in 2008, or what if Google’s high-performance V8 engine had never arrived – how might the language and ecosystem have differed?
Methods & Sources: This report draws on primary sources including official Ecma (ECMA-262) specifications and meeting notes, TC39 proposal documents and meeting records, browser engine team blogs and changelogs, Node/Deno design docs, package manager RFCs, and archived discussions on standards. We also use practitioner accounts: influential blog posts by Brendan Eich and other insiders, technical retrospectives by framework authors, postmortems of incidents, and contemporary reports from credible tech journalism. Where data is available (npm download stats, GitHub metrics, usage surveys), we include it – while noting biases (e.g. surveys over-represent certain communities). All sources are cited in endnotes, using the format【†】 linking to the reference.
We begin our journey in the mid-1990s, when the Web was young and JavaScript was born in a hurry amid the browser wars.
1995–1999: Invention, Standardization, and the Browser Wars
Birth of JavaScript (1995): In May 1995, Netscape Communications – the dominant browser vendor of the early Web – tasked engineer Brendan Eich with creating a scripting language for adding interactivity to web pages. Eich famously developed the first version of the language in just 10 days[2]; it was initially code-named Mocha and then shipped as LiveScript in beta releases of Netscape Navigator 2.0[3][4]. The language was designed to resemble Java in syntax (to leverage Java’s hype and familiarity) but to be lightweight and dynamic in spirit, like Scheme or Self[2][5]. In December 1995, after Netscape struck a marketing partnership with Sun Microsystems (makers of Java), LiveScript was officially renamed “JavaScript”[6]. (Sun subsequently trademarked the name and licensed it back to Netscape, aligning with JavaScript’s positioning as Java’s “sidekick” scripting language[6].) Despite the name, JavaScript’s semantics were quite distinct from Java: it was prototype-based (not class-based), dynamically typed, and had first-class functions and object literals influenced by Scheme and Self.
Technical Features of JavaScript 1.0: The early JavaScript (Navigator 2.0’s implementation, codenamed SpiderMonkey) was extremely limited by today’s standards. It allowed basic DOM manipulation (Document Object Model) – e.g. finding elements by document.forms or document.layers (Netscape-specific) – and form input validation. It had primitive types, objects as dynamic property maps, and the quirky with statement and == type-coercing equality. There were no formal modules, no class syntax, no Promise or async—just simple event handlers and immediate execution in page context. Debugging was rudimentary (often just alert() calls). Still, for the first time, web pages could respond to user input without a full page reload, which was revolutionary.
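The flavor of that first-generation scripting can be sketched in a few lines. This is an illustrative reconstruction, not period code; checkForm is an invented name, and in a page it would be wired up via an attribute like onsubmit="return checkForm(document.forms[0])":

```javascript
// A 1990s-style form validation routine (sketch). Returning false from
// an onsubmit handler cancels the submission -- the one trick that made
// pages respond to input without a round-trip to the server.
function checkForm(form) {
  if (form.elements.email.value === '') {
    // In Navigator 2.0, alert() doubled as the debugger of last resort.
    alert('Please enter an email address.');
    return false; // cancel the form submission
  }
  return true; // allow the submission to proceed
}
```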
Microsoft JScript and the First Browser War: In late 1995, Microsoft reverse-engineered Netscape’s JavaScript implementation and built its own compatible engine, JScript, to embed in Internet Explorer[7]. Microsoft released JScript with Internet Explorer 3.0 in 1996, making it generally compatible with Netscape’s JavaScript but with some differences and extensions. The browser wars between Netscape Navigator and Microsoft IE in the late 90s led to divergent, proprietary additions to the Web platform. Each browser had unique DOM APIs and event models: for example, Netscape supported the <layer> element and its own DOM, while Microsoft introduced the non-standard but useful innerHTML property (allowing quick insertion of HTML strings into the DOM) and the attachEvent model for events. JavaScript code of the era often featured many if (navigator.appName == "Netscape") ... else if (/* IE */) branches to handle incompatibilities. This period also saw the rise of DHTML (Dynamic HTML) as a buzzword – essentially using JavaScript plus the evolving DOM and CSS to create interactive, animated page components. Developers manipulated images for rollovers, created popup windows, and implemented simple form validation. But without a single standard DOM, writing cross-browser DHTML was painful.
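A sketch of that branching style, using the real proprietary APIs of the day (moveBox is a hypothetical helper; Netscape 4 exposed document.layers with a moveTo() method, while IE4 exposed document.all plus CSS positioning via style.left/top):

```javascript
// Late-90s cross-browser positioning (sketch). Each branch targets one
// vendor's proprietary DOM; there was no shared standard to code against.
function moveBox(id, x, y) {
  if (document.layers) {                      // Netscape 4 layers API
    document.layers[id].moveTo(x, y);
  } else if (document.all) {                  // IE4+ proprietary DOM
    document.all[id].style.left = x + 'px';
    document.all[id].style.top = y + 'px';
  }
}
```

Libraries of the era existed largely to hide exactly this kind of fork from application code.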
ECMA-262 Standardization (1996–1997): To avoid JavaScript fragmenting into incompatible dialects, Netscape and Sun moved to standardize the core language. In 1996 they submitted JavaScript’s syntax and semantics to Ecma International, a standards body. The Ecma Technical Committee 39 (TC39) was formed with representatives from Netscape, Sun, Microsoft, and others[8][9]. In an early TC39 meeting (mid-1997), the members agreed that the standard couldn’t use the trademarked “JavaScript” name (and Netscape was unwilling to transfer it)[10][11]. Thus, the neutral name “ECMAScript” was adopted for the standardized language, to be defined in ECMA-262. In June 1997, the 1st Edition of ECMA-262 (ECMAScript) was approved[12] – effectively standardizing JavaScript 1.1 (as implemented in Navigator 3 and IE3) under a different name. This specification defined things like prototypes, type coercion rules, Date and Math objects, etc., providing a common reference for browser vendors[13][5].
Over the next two years, TC39 issued ECMAScript 2nd Edition (1998) – mainly editorial changes to align with the ISO standard – and ECMAScript 3rd Edition (December 1999), which added significant features to the language[8]. ES3 introduced regular expressions, try/catch exception handling, better string manipulation, and tighter definitions of many behaviors. Notably, ES3 also formalized some de facto features from browsers and fixed some quirks. By 1999, all major browsers aimed for ES3 compliance. Microsoft’s IE5 (1999) implemented most of ES3, as did Netscape’s browsers. This brought a degree of language consistency across browsers, at least for the core JavaScript syntax and built-in objects.
De Facto DOM and DHTML: While the core language was being standardized as ECMAScript, the browser environment APIs (DOM, BOM) were not fully standardized in the 90s. The W3C began work on a Document Object Model specification: DOM Level 1 was issued in 1998, defining a neutral API for document structure and element manipulation. Internet Explorer and Netscape gradually implemented parts of DOM Level 1, but each retained legacy methods (e.g., IE’s document.all for element access, Netscape’s document.layers). Dynamic HTML techniques in this era were thus an “informal” practice area where developers traded tips on cross-browser tricks. For example, to position elements, Netscape supported layer.moveTo() while IE allowed setting CSS through style.left/top – leading to libraries that abstracted these differences.
By 1999, JavaScript was at a crossroads. It had an official standard (ES3) and was supported in all browsers, but browser vendors were diverging in how they exposed the environment. A backlash was brewing in some circles due to JavaScript’s perceived unreliability and the security issues of the era (like popup spam and signed scripts in Netscape). Nonetheless, the stage was set for JavaScript to advance both formally (via TC39’s continuing work) and informally (via community-driven solutions to everyday pain points).
Before moving to the 2000s, it’s worth noting an early example of practice driving standardization: one such case was JavaScript’s Date object, which in early versions was inconsistently implemented. Developers clamored for better date parsing/manipulation, and ES3 standardized a consistent Date API. Another example was the RegExp API – originally coming from Netscape’s implementation, it became so useful (e.g., for form validation) that it was standardized in ES3[8]. These foreshadow the many instances in JavaScript’s history where community experimentation preceded formal adoption.
2000–2006: AJAX Era, JSON, and Early Libraries
Stagnation and Fragmentation Post-ES3: After 1999’s ES3, the official language standard entered a quiet period; the next edition of ECMAScript was years away. Meanwhile, the early 2000s saw Internet Explorer 6 (2001) achieve a dominant market share, effectively ending the first browser war. IE6’s dominance ironically led to stagnation in web standards implementation – Microsoft was slow to adopt new standards or fixes. Web developers were often forced to target IE6’s quirks. Netscape’s browser (whose engine became Mozilla’s SpiderMonkey in the open-source Mozilla project) continued implementing standards and new ideas, but had a smaller share until Firefox’s rise in 2004. Thus, from 2000–2004, the web environment was dominated by IE’s capabilities and limitations.
However, this period also incubated groundbreaking ideas in using JavaScript for web applications. The most important was AJAX (Asynchronous JavaScript and XML). In 2004-2005, web apps like Google Gmail and Google Maps wowed users by dynamically fetching data from the server without full page reloads. This was enabled by the little-known XMLHttpRequest object, which Microsoft had originally introduced in IE5 (1999) as an ActiveX control for Outlook Web Access. By mid-2000s, other browsers implemented XMLHttpRequest in a standard way, and the term “AJAX” was coined in early 2005 to describe this new approach to web apps. The AJAX pattern (async network calls + DOM updates) dramatically increased what could be done with JavaScript, inaugurating the era of rich web applications. It also highlighted the need for better data interchange formats and cross-browser libraries.
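The mechanics behind AJAX can be sketched as follows, with the real XMLHttpRequest/ActiveXObject split of the mid-2000s (createXHR and ajaxGet are illustrative helper names, not a standard API):

```javascript
// Creating an XMLHttpRequest circa 2005 (sketch): standards browsers
// exposed a constructor, while IE5/6 required an ActiveX object.
function createXHR() {
  if (typeof XMLHttpRequest !== 'undefined') {
    return new XMLHttpRequest();
  }
  return new ActiveXObject('Microsoft.XMLHTTP'); // IE5/6 fallback
}

// The core AJAX pattern: an async request plus a callback that updates
// the page, with no full reload.
function ajaxGet(url, onSuccess) {
  var xhr = createXHR();
  xhr.open('GET', url, true);             // true = asynchronous
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      onSuccess(xhr.responseText);        // e.g., insert into the DOM
    }
  };
  xhr.send(null);
}
```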
JSON – From Ad Hoc to Ubiquitous: One crucial informal development was the emergence of JSON (JavaScript Object Notation) as a lightweight data format. Engineers increasingly needed to shuttle data between server and client. XML was used initially (hence the “X” in AJAX), but it was verbose and required parsing. In the early 2000s, Douglas Crockford popularized JSON – a subset of JavaScript’s literal object syntax – as an alternative. In December 2002, Crockford launched JSON.org and described JSON’s format[14][15]. JSON could be parsed trivially by JavaScript’s eval() (although this was later recognized as a security hazard, leading to the development of safe parsers)[16]. JSON quickly became the de facto standard for client-server data exchange in web apps, because it was simpler and more JavaScript-friendly than XML. This is a prime example of practice leading standard: JSON was born as a grassroots convention, gained massive adoption by 2005 (e.g., used in Yahoo’s and Google’s services), and was later standardized formally as ECMA-404 (2013)[17] and integrated into ECMAScript 5 as native JSON.parse and JSON.stringify methods[18]. In other words, the community’s use of JSON forced the standard to catch up, providing a safe, built-in JSON support by 2009[18].
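A minimal before/after sketch of JSON handling (the eval() line is left as a comment, since executing arbitrary response text was exactly the hazard ES5's native JSON.parse eliminated):

```javascript
// A JSON payload as a server might send it:
var payload = '{"user": "ada", "roles": ["admin"]}';

// Early-2000s practice (dangerous -- runs any script in the string):
// var data = eval('(' + payload + ')');

// ES5 (2009) and later -- safe, built-in parsing and serialization:
var data = JSON.parse(payload);
console.log(data.user);                    // "ada"
console.log(JSON.stringify({ ok: true })); // '{"ok":true}'
```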
Early JavaScript Frameworks and Libraries: As more developers built interactive sites, sharing reusable code became important. In the early 2000s, a number of JavaScript libraries appeared, often aiming to smooth over browser differences and provide higher-level abstractions:
- Prototype.js (2005) – an influential library that extended built-in prototypes (hence the name) to add convenience methods (e.g. Array.prototype.each). Prototype pioneered the $() function for element lookup and an easy Ajax API. It introduced the concept of a JavaScript framework for web dev, bundling polyfills for missing features.
- Dojo Toolkit (2004) – a comprehensive toolkit sponsored by IBM, with modules for Ajax, UI widgets, and a build system that could combine modules ahead of time (a precursor to later module loaders and bundlers). It also tackled many cross-browser issues.
- MooTools (2006) – another framework building on Prototype’s concepts, with an elegant API for OOP-like class declarations and extensions.
- Yahoo! UI Library (YUI) (2006) – an early corporate open-source library by Yahoo, providing UI components and utilities with a heavy focus on consistency.
- Smaller code snippets and “copy-paste” scripts also circulated on forums (e.g., dynamic menu scripts, form validators). Communities like Dynamic Drive and early Stack Overflow precursors shared such recipes.
These libraries addressed real pains: browser incompatibilities, and the lack of higher-level constructs. For example, Prototype normalized event handling and added methods like Element.update() (which internally used IE’s innerHTML or DOM methods depending on environment) to let developers code to one API. They also introduced new idioms: Prototype and MooTools encouraged extending built-in prototypes (which later fell out of favor due to global side effects), while Dojo introduced modular development (with its own module system, before AMD).
Notably, functional programming aids began sneaking in – Prototype added Function.prototype.bind (ahead of standard) and array iteration methods that resembled what ES5 would later include. Indeed, many of ES5’s additions (like Array.prototype.forEach and Function.prototype.bind) were directly inspired by patterns in these libraries, showing how practice led standardization again.
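The back-filling idiom these libraries relied on, which ES5 later blessed as the polyfill pattern, looked roughly like this (a simplified sketch, not Prototype's actual source; each was Prototype's name for what ES5 standardized as forEach):

```javascript
// Guarded prototype extension: only define the method if the host
// environment (or another library) hasn't already provided one.
if (!Array.prototype.each) {
  Array.prototype.each = function (fn) {
    for (var i = 0; i < this.length; i++) {
      fn(this[i], i);
    }
    return this; // returning the array allows chained calls
  };
}
```

The downside, which later turned the community against the practice, is that every script on the page shares these mutated built-ins, so two libraries defining conflicting methods could silently break each other.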
Toward ES4 – The Big Schism: Behind the scenes, TC39 was not idle. From 2004 to 2007, there was an effort to create ECMAScript 4, a massive upgrade to the language with features like classes, modules, optional static typing, and more (heavily influenced by Mozilla’s experiments and Adobe’s ActionScript 3, which was an ES4-like implementation in Flash). However, ES4 became politically contentious – different parties had different visions. Microsoft (with the market-dominant IE) was resistant, concerned that ES4’s big changes would break web compatibility and be hard to implement[19]. The standards vs. reality split was stark: developers on the ground were more concerned with making things work in IE6 and using libraries to get things done, while TC39’s ambitious ES4 plans seemed disconnected from immediate needs. After much debate, ES4 was abandoned in 2008 – an outcome often referred to as the “Harmony” compromise. The committee agreed to focus on a smaller incremental update (which became ES5) and then later a new major update (what became ES6), dropping the more controversial ES4 proposals. This episode demonstrated how governance and economic reality (IE’s dominance) reined in the standard’s evolution. A counterfactual we’ll revisit is: What if ES4 had shipped? – it could have made JavaScript more like a statically typed, class-based language a decade earlier, but at risk of fracturing the ecosystem. In reality, the choice to abandon ES4 preserved unity and allowed incremental progress, guided in part by the patterns that were proven in practice.
Engine Developments: In the 2000–2006 era, JavaScript engine performance began to matter as web apps grew more complex. Microsoft’s JScript in IE6 was adequate for the time but had known limitations (e.g., memory leaks via cyclic DOM references). Mozilla’s SpiderMonkey (used in Firefox, which rose from Netscape’s ashes in 2004) introduced the first just-in-time (JIT) compilation for JavaScript with TraceMonkey in 2008 (just outside this period)[20]. But in our 2000–2006 window, most engines were still interpreters. A noteworthy engine is Apple’s JavaScriptCore (derived from KDE’s KJS engine) which powered Safari since 2003; it was fast and ES3-compliant but also interpreter-based until mid-2000s. Opera’s engine in this period (pre-Carakan) was also interpreter-based. So, by 2006, JavaScript was slow for heavy computations, limiting the ambitions of web apps. Yet, clever developers found ways to do more with less – e.g., progressive enhancement (where dynamic scripts only load if browser supports them) was common to ensure pages remained usable even if JS was slow or off.
The Rise of AJAX Applications: By 2005–2006, buoyed by the success of Gmail and Google Maps, the industry invested heavily in AJAX applications. Startups and enterprises alike began building richer UIs on the web (e.g., web-based email, calendars, office apps). This drove a virtuous cycle: more developers learned JavaScript, shared libraries, and pushed browser makers to improve. It also highlighted new needs in the language and platform: better debugging tools (leading to the birth of Firebug in 2006, the first powerful JS debugger and the forerunner of modern browser DevTools), and patterns for structuring increasingly large JS codebases (still mostly ad hoc at this time, but foreshadowing the MVC frameworks to come).
In summary, 2000–2006 was a period where informal innovation outpaced formal standards. JSON and AJAX were not handed down by standards bodies – they were discovered/assembled by practitioners and then embraced universally. JavaScript itself didn’t get a formal update until 2009, but it grew into new roles informally: from form validations to full-blown application logic. This era sowed seeds for both the explosion of JavaScript frameworks and the re-engagement of standards bodies to address what the community had built. It also set up the conditions for the next wave: by 2006, developers had Prototype, Dojo, and others in their toolkit, but along came a library that would quickly overshadow them all – jQuery – just as new engines and Node.js were about to change the game.
2007–2011: jQuery Hegemony, V8 and Performance Breakthroughs, Node.js and Modules
jQuery Unifies the DOM (2006–2011): In August 2006, John Resig released jQuery 1.0, a library that dramatically simplified HTML document manipulation, event handling, and Ajax. jQuery’s slogan, “Write less, do more,” captured its appeal. It introduced a fluent, chainable API around a core function $() that could select DOM elements with CSS selectors and apply operations to them. Crucially, jQuery abstracted away the vexing cross-browser issues: methods like $.ajax(), $.css(), and .text() would work uniformly across IE, Firefox, Safari, etc., where native behavior differed. Within a few years, jQuery became near-ubiquitous on the web: by some estimates, it powered over 50% of all websites by the early 2010s. It was lightweight (relatively), well-documented, and had an ecosystem of plugins. jQuery’s dominance (2007–2012) made it effectively an informal standard library for the web – many developers learned jQuery before learning “raw” JavaScript DOM APIs. This had mixed effects: it set a high-level baseline of capabilities (e.g., “I can select any element by CSS and animate it”), but also meant many devs were initially insulated from lower-level JS (leading to the trope of “jQuery developer” vs “JavaScript developer”).
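jQuery's chainability came from a simple convention: every method returns the wrapper object. A minimal sketch of the idea (Wrapper and this toy $ are illustrative, not real jQuery internals, and real jQuery selects by CSS selector rather than taking an element array):

```javascript
// A toy chainable wrapper in the jQuery style.
function Wrapper(elements) {
  this.elements = elements;
}
Wrapper.prototype.addClass = function (name) {
  this.elements.forEach(function (el) {
    el.className += ' ' + name;
  });
  return this; // returning `this` is what enables chaining
};
Wrapper.prototype.text = function (value) {
  this.elements.forEach(function (el) {
    el.textContent = value;
  });
  return this;
};
function $(elements) {
  return new Wrapper(elements);
}

// Usage in the familiar fluent style:
// $(menuItems).addClass('active').text('Menu');
```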
From a history perspective, jQuery’s success forced standards to adapt in subtle ways. For example, jQuery demonstrated the utility of CSS query selectors in the DOM – leading browsers to implement document.querySelectorAll() (defined in the W3C Selectors API, widely available by 2009) to allow native CSS selection (jQuery still often outperformed early implementations until browsers optimized them). jQuery also showed the value of feature detection (it popularized checking for support rather than browser-sniffing), which became a best practice and influenced standards to favor detectable, additive features.
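The feature-detection idiom jQuery popularized looks like this (addEvent is a hypothetical helper name; addEventListener and attachEvent are the real competing event APIs of the period):

```javascript
// Feature detection: probe for the capability itself instead of
// sniffing navigator.appName or the user-agent string.
function addEvent(el, type, handler) {
  if (el.addEventListener) {          // W3C DOM model
    el.addEventListener(type, handler, false);
  } else if (el.attachEvent) {        // legacy IE model
    el.attachEvent('on' + type, handler);
  } else {
    el['on' + type] = handler;        // last-resort DOM0 assignment
  }
}
```

Because the check is against the object at hand, the same code keeps working as browsers gain (or drop) APIs, which is why this pattern aged far better than user-agent branching.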
ECMAScript 5 (2009): On the formal side, the abandonment of ES4 led TC39 to push out a smaller update, finished as ECMAScript 5 and approved in December 2009[21]. ES5 standardized many things the community had been doing: it added strict mode (an opt-in to catch common errors like accidental globals), built-in JSON support (JSON.parse/stringify, formally integrating JSON as mentioned above)[18], Array.prototype.forEach, map, filter, etc. (inspired by Prototype.js and other libraries that had filled that gap), Function.prototype.bind (inspired by Prototype’s version), and Object.defineProperty plus property descriptors (needed by advanced libraries to create non-enumerable properties and the like). ES5 also included minor yet useful features like trailing commas in object literals and Date.now(), and provided a foundation for better library development with the introduction of accessor properties (getters/setters). Strict mode was a nod to the need for more robust, secure JS – it turned certain bad practices into errors (e.g., assigning to undeclared variables, or using with). Notably, ES5 shipped without the more ambitious ES4 items, but it laid groundwork for them (e.g., getters/setters and property descriptors foreshadowed later metaprogramming features such as proxies). Microsoft’s JScript team participated actively in the ES5 specification work (after having been on the sidelines during the ES4 arguments), and Internet Explorer 9 (2011) implemented ES5 features, as did other browsers by 2011. This meant that by 2011, for the first time since 1999, there was a new baseline language developers could rely on (assuming users had modern browsers): features like forEach gradually became safe to use without shims as older IE versions phased out.
Interestingly, one immediate practice-to-spec example in ES5 was the JSON integration – JSON was so prevalent in 2008 that TC39 felt it must be included to prevent developers from continuing to use insecure eval() for parsing[16][18]. Another was the formalization of libraries’ pattern of property definitions: libraries like Dojo and Prototype had introduced ways to extend objects, and ES5’s Object.defineProperty gave them a standard, powerful tool (leading to things like polyfills for Object.create etc. using it).
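A few of those ES5 additions in one place (a sketch; the comments note which library practice each one codified):

```javascript
'use strict'; // ES5 strict mode: turns several silent mistakes into errors

// Array iteration methods, standardized from libraries like Prototype:
var names = ['prototype', 'dojo'];
var upper = names.map(function (s) { return s.toUpperCase(); });

// Function.prototype.bind, standardized from Prototype's version:
function greet(greeting) { return greeting + ', ' + this.name; }
var bound = greet.bind({ name: 'ES5' });

// Object.defineProperty: non-enumerable properties, previously
// impossible for libraries to create without engine support.
var config = {};
Object.defineProperty(config, 'version', {
  value: 5,
  enumerable: false, // hidden from for-in loops and Object.keys
  writable: false
});
console.log(Object.keys(config)); // []
```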
The Browser Engine Renaissance (2008–2011): In September 2008, Google launched Chrome with its new V8 JavaScript engine, a watershed for JS performance. V8 compiled JavaScript directly to machine code on the fly (with no intermediate bytecode at the time) and introduced advanced JIT techniques like hidden classes and inline caches to optimize property access[22][23]. At launch, V8 was roughly 10× faster on benchmarks than the competing engines of 2008[24]. This ignited a performance arms race often called the “race for speed”. Within months, Mozilla released TraceMonkey (Firefox 3.5, 2009), the first JS JIT in a mainstream browser (a trace-based JIT that specialized hot code paths)[20]. Mozilla followed up with JägerMonkey (Firefox 4, 2011), which combined tracing with a method-based JIT, and then IonMonkey (Firefox 18, late 2012), a full optimizing compiler that dropped tracing entirely[25][26]. Apple’s WebKit engine (Safari) introduced SquirrelFish Extreme (“Nitro”) in 2008 with bytecode and a JIT, and later the LLInt and FTL JIT tiers. Microsoft rebuilt its engine as Chakra for IE9 (2011), featuring adaptive profiling and multi-core JIT (compiling hot code on a separate thread). Opera replaced its interpreter with the Carakan JIT, announced in 2009 and shipped in 2010. By 2011, the net effect was that JavaScript execution speed across browsers had improved by an order of magnitude or more compared to 2007. For developers, this meant that using JavaScript for heavier computations or more complex applications became realistic. For instance, approaches like client-side templating (generating HTML strings in JS) or complex animations were now broadly feasible, where previously they might have been too slow except on the fastest machines.
The engine improvements also influenced language and platform features adoption: features that might have been too slow in the past (like recursion or heavy use of closures) became more acceptable. Google’s V8 team famously optimized how closures and prototypes were handled, reducing the performance penalty. They also implemented the first on-stack replacement and generational garbage collection techniques to minimize pauses. All these made JavaScript a serious application runtime. A telling milestone: in 2010, Google demonstrated Chrome running 3D graphics and intense demos via JS (before WebGL or asm.js, they even ported a physics engine). The performance race pushed the boundaries of what kind of apps one might write in JS – no longer just form validators, but maybe even games or heavy data processing (though still limited relative to native).
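The practical upshot of hidden classes and inline caches for application code can be sketched like this (illustrative only; the performance behavior described is V8's documented design, not measured here):

```javascript
// Objects constructed with the same properties in the same order share
// a hidden class in V8, so property accesses on them can be served by
// fast inline caches instead of dictionary lookups.
function Point(x, y) {
  this.x = x; // always assign properties in the same order
  this.y = y; // adding properties later, or conditionally, splits shapes
}

function sumX(points) {
  var total = 0;
  for (var i = 0; i < points.length; i++) {
    total += points[i].x; // monomorphic access: one shape, one fast path
  }
  return total;
}
```

Post-2008, idioms like this, consistent object shapes and monomorphic call sites, became part of performance folklore precisely because the new JITs rewarded them.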
Node.js Brings JavaScript to the Server (2009): Perhaps the most consequential informal development in this era was Node.js, created by Ryan Dahl and introduced in 2009. Node.js took Google’s high-performance V8 engine and embedded it in a C++ program providing a set of system APIs (file system, networking, etc.), thereby enabling JavaScript to be used for server-side scripting and network applications. Node’s programming model was event-driven, non-blocking I/O – a natural fit given JavaScript’s single-threaded callback style inherited from browsers. This was revolutionary: for the first time, JavaScript could be used to write a web server or command-line tool. Node.js quickly gained traction among startups and frontend developers who were excited to use one language across client and server. By 2011, Node had a vibrant ecosystem, thanks in part to npm (Node Package Manager), which was created in 2010 as a central registry for Node modules. npm lowered the barrier to sharing and reusing code, leading to an explosion of packages.
Node.js influenced JavaScript’s trajectory in multiple ways:
- It accelerated the need for better module systems. On the server, modularization was crucial (to organize code and manage dependencies). Node pioneered the CommonJS module format (require() and module.exports), which allowed one JS file to import functionality from another in a straightforward way. This was not part of the ECMAScript standard (which had no module syntax yet). CommonJS modules became the norm in Node and influenced packaging on the client side later. The Node community’s pressure on TC39 to adopt a module system is well-documented[27]. Indeed, developers began criticizing TC39 around 2013–2014 for not having standardized modules sooner[27], given that Node’s CommonJS and the AMD system in browsers (see below) were already widely used. Eventually this pressure yielded results in ES6.
- It made JavaScript a player in the server-side and tooling arena. People built web frameworks in Node (e.g. Express, 2010) and command-line tools (like linters and build tools) in Node. This meant the language was no longer just at the mercy of browser release cycles; Node could update V8 and add language features independently. It also meant that JavaScript’s community expanded to include backend developers and system programmers, whose needs (e.g. file system APIs, binary data handling) influenced new standard APIs (like Node’s Buffer, which influenced Typed Arrays in JS for binary data).
- Node’s emphasis on asynchronous, non-blocking patterns reinforced the importance of callbacks and event loops. Patterns like the “error-first callback” (e.g. fs.readFile('file.txt', (err, data) => { ... })) became idiomatic. Over time, the pain of “callback hell” in Node would spur the invention of Promises and async/await in the language (more on that soon).
Browser Module Systems – AMD and UMD: While Node.js adopted CommonJS modules for server JS, the browser world faced a different challenge: browsers initially had no built-in module loader aside from <script> tags (which operate in global scope). As front-end apps grew, managing dependencies and load order via multiple script tags became untenable. Around 2010, developers created the Asynchronous Module Definition (AMD) format, championed by the RequireJS library (by James Burke). AMD used a function define(['dep1', 'dep2'], function(d1, d2){ ... }) to declare modules and their dependencies, and loaded them asynchronously. This fit the browser’s non-blocking needs. By 2011–2012, many major JS libraries were distributed in AMD or had AMD wrappers. Meanwhile, to allow libraries to work in both Node and the browser, a pattern called UMD (Universal Module Definition) emerged, which essentially checked for the presence of CommonJS (module.exports) or AMD (define) and defined the module accordingly. This way a single build of a library could run on Node (CommonJS) or in a browser with RequireJS (AMD), or just as a global if neither loader was present.
These multiple module systems were a headache for developers and library maintainers – a clear example of practice outpacing standards. The lack of a standard module system in ES spurred a lot of creativity (and frustration). It took until ES2015 to get a standardized module syntax, which we’ll discuss in the next section. But in 2007–2011, the groundwork for modular JS was laid by CommonJS and AMD, demonstrating both the need for modules and the challenge of reconciling different environments.
Web APIs and Platform Strides: This era also saw important additions to the Web platform that extended what JS could do:
- HTML5 and related APIs: Starting around 2008, WHATWG’s HTML5 work (finally standardized 2014, but implemented much earlier) introduced canvas 2D graphics, video/audio tags (with JS control), localStorage, Web Workers (background JS threads, first in browsers ~2009), and later WebSockets (full-duplex communication, standardized 2011). Each of these opened new possibilities: e.g., Canvas let JS do dynamic graphics (leading to games and visualizations in JS), Web Workers addressed concurrency by allowing multi-threaded JS (important for using compute power without freezing the UI).
- Same-Origin Policy and AJAX security: The use of XHR across domains prompted the development of CORS (Cross-Origin Resource Sharing) around 2008–2009, so that servers could opt-in to allow cross-site XHR. Browsers implemented CORS as a way to relax the traditional Same-Origin Policy in a controlled manner for XHR/fetch. This was an example of browser standards adapting to practical needs of web apps calling third-party APIs.
- JavaScript in other hosts: Outside of browsers, Adobe Flash’s ActionScript 3 (2006) and Microsoft’s Silverlight (2007, with a C#-like script) offered alternate visions for rich web content. But by 2011, it was clear that open web standards (JS + HTML5) were winning out. Still, some ideas cross-pollinated (ActionScript 3 influenced JS proposals; Silverlight’s demise left developers more invested in JS).
Community and Culture (2007–2011): JavaScript’s community matured significantly. The first JSConf (the JavaScript Conference) was held in 2009, signaling that JS developers now had a strong identity and community (beyond being seen as “just web designers,” as earlier perceptions sometimes held). Douglas Crockford’s influential book JavaScript: The Good Parts (2008) distilled and legitimized best practices, encouraging a generation of developers to use patterns (like the revealing module pattern, and avoiding `with` and `eval`) that made JS more maintainable. This was part of a broader professionalization of JS development.
“Real Programmers” and JavaScript: A notable social shift in this era was the acceptance of JavaScript as a “real” programming language. With the appearance of serious infrastructure like Node and improving performance parity, engineers from other domains began to respect (or at least acknowledge) JavaScript’s importance. That said, frustration with some JS quirks (like the lack of classes or types) led to early transpiled languages: e.g., CoffeeScript (2009) offered syntactic sugar that compiled to JS, adding class syntax, arrow functions, and more. CoffeeScript’s popularity in 2010–2012 hinted that developers wanted nicer syntax – a signal that influenced TC39 too (many CoffeeScript ideas, like => arrows, found their way into ES6).
By 2011, JavaScript had grown from a niche browser scripting tool into a full-stack, versatile ecosystem: you could write the client code, the server code (Node), and even command-line tools all in JS. The informal innovations (like jQuery, Node, AMD modules) were thriving, and the formal side (ES5 standard, emerging HTML5 standards) was catching up to legitimize these innovations. Next, we will see the explosion of client-side MVC frameworks and the landmark release of ECMAScript 2015 (ES6) – which together defined the 2012–2015 period.
2012–2015: SPA Framework Explosion, React’s Rise, npm Ecosystem Growth, and ES2015 Link to heading
By 2012, JavaScript was poised for an application-scale revolution on the client side. The term “Single-Page Application (SPA)” gained currency – denoting web apps that load a single HTML page and dynamically update it as the user interacts, often through Ajax, without full reloads. SPAs promised a more fluid, app-like experience. However, building SPAs with just jQuery (which excelled at DOM manipulation but offered no structure) became problematic as complexity grew. This drove the rise of client-side frameworks adopting MVC/MVVM patterns to impose structure on JS applications.
MVC and MV* Frameworks (2010–2013): A number of frameworks emerged to organize state, templates, and UI updates:
- Backbone.js (2010) – a minimalist library that provided models (with key-value binding and events), collections, and a rudimentary Router for managing views. It relied on the developer to choose a templating solution. Backbone, paired with underscore.js for utilities, was very popular in the early 2010s, particularly because it was lightweight and flexible. It followed an MVC-ish pattern (or MVP, depending on interpretation) and encouraged separation of data models from DOM.
- AngularJS (Angular 1.x, from Google, first released 2010) – a full-featured framework that introduced two-way data binding: the view (DOM) was live-bound to the model, so updates in the model auto-reflected in the DOM and vice versa, via a digest cycle. AngularJS also had dependency injection, directives to extend HTML, and a comprehensive solution out of the box. Circa 2012–2014, AngularJS became extremely popular for building SPAs, especially in enterprise and complex apps, due to its “batteries included” approach.
- Ember.js (2011, from the SproutCore project) – another ambitious framework focusing on convention over configuration. Ember provided a router, templates with Handlebars, and an object model. It aimed to provide a Rails-like experience on the client side. Ember was influential in pushing for URL-driven client apps (making sure the back/forward buttons and direct linking would work in SPAs via its router).
- Knockout.js (2010) – an MVVM library focused on declarative two-way bindings (using observables), somewhat a lighter alternative to Angular’s binding.
- Ext JS (Sencha) – which actually predates this era (it originated as YUI-Ext) – a heavy component framework often used in enterprise, with a very OO approach and its own UI components. It was less open (GPL/commercial dual licensing) but shows the breadth of approaches.
These frameworks addressed the dominant problem of the era: “How do we manage state and complexity in large JS apps?”. jQuery could handle interactions fine, but as soon as you had to coordinate multiple views or maintain state in memory, jQuery alone could become spaghetti. Frameworks introduced structured patterns (like models that emit events, routers that map URLs to states, templating systems for UI) and improved Developer Experience (DX) by giving a clear project structure. The trade-off was framework complexity and learning curve. For example, AngularJS’s two-way binding could lead to performance issues (lots of watchers triggering digests) and debugging issues (if you didn’t understand the digest cycle). Ember’s conventions required adherence but gave magic in return.
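The observable idea at the heart of these binding systems can be sketched in a few lines. This is a toy model in the spirit of Knockout’s observables and AngularJS’s watchers, not the actual API of either library:

```javascript
// A minimal observable: a read/write function that notifies subscribers
// on every write. Frameworks wired such notifications to DOM updates.
function observable(initial) {
  let value = initial;
  const subscribers = [];
  function obs(next) {
    if (arguments.length === 0) return value;   // read: obs()
    value = next;                               // write: obs(v)
    subscribers.forEach((fn) => fn(value));     // notify all bindings
    return value;
  }
  obs.subscribe = (fn) => subscribers.push(fn);
  return obs;
}

const title = observable('untitled');
let boundText = '';                             // stand-in for a DOM node's text
title.subscribe((v) => { boundText = v; });     // a "binding" to the view
title('My App');                                // a model write updates the view
```

The hard parts the frameworks actually solved – batching updates, detecting writes to plain objects, avoiding infinite update loops (AngularJS’s digest cycle) – all grow out of this simple core.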
By 2013, it was clear the community was split across these frameworks. Debates raged about two-way vs. one-way binding, string templates vs. HTML-based templates, and more. It was fertile ground for new ideas – and indeed, in May 2013 at the JSConf US conference, Facebook presented a newcomer that would soon reshape front-end architecture.
React and the Declarative UI Paradigm (2013): Facebook open-sourced React in 2013, offering a radical shift: UI should be described as a function of state, using a declarative component model, and the framework would efficiently update the DOM via a Virtual DOM diffing algorithm[28][29]. Instead of two-way binding, React emphasized unidirectional (one-way) data flow – parent components pass data down to children, and events bubble up to inform state changes, making the app’s state management more predictable. Initially, some in the community were skeptical (especially because React introduced JSX, an XML-like syntax for UI inside JS, which broke from the prevailing separation-of-HTML-and-JS mindset). But React had clear advantages: by treating the UI as a pure projection of state, and using virtual DOM diffing, it provided both developer simplicity (no need to manually manipulate the DOM or track when to update what – just re-render and let React reconcile) and often performance gains for complex UIs (since it batched and minimized actual DOM operations). By 2014–2015, React was gaining immense popularity, especially after it was used successfully in Facebook’s own products and others adopted it for its composability.
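The core idea can be illustrated with a toy sketch: a render function that maps state to a lightweight tree, and a diff that computes minimal patches. This is a simplification for illustration, not React’s actual reconciliation algorithm:

```javascript
// "UI as a function of state": render produces a cheap virtual node.
const render = (state) => ({ tag: 'button', text: `Count: ${state.count}` });

// A toy diff: compare old and new virtual trees, emit minimal patches.
function diff(prev, next) {
  const patches = [];
  if (!prev || prev.tag !== next.tag) {
    patches.push({ type: 'replace', node: next });      // element type changed
  } else if (prev.text !== next.text) {
    patches.push({ type: 'setText', text: next.text }); // only text changed
  }
  return patches;   // only these patches would touch the real DOM
}

const before = render({ count: 0 });
const after = render({ count: 1 });
const patches = diff(before, after);   // one small setText patch, not a rebuild
```

The developer just re-renders on every state change; the diff layer keeps actual DOM operations minimal.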
React’s influence went beyond its usage: it heralded the era of components as the unit of UI reuse (versus reusing templates or full MVC structures). It also indirectly led to flux architectures (like Redux in 2015) to manage global state with clear flows. We will examine the React vs AngularJS transition in detail in the comparative case study (see Comparative Case Study: AngularJS to React), but historically, by 2015 React had sparked a shift in best practices toward declarative rendering and functional programming concepts in UI (e.g., immutability for state, pure functions for rendering).
ES2015 (ES6) – A Landmark Standard Update: Meanwhile, the formal side caught up in a big way. ECMAScript 6th Edition, rebranded ECMAScript 2015, was finalized in June 2015[30]. This was the most significant language update ever, incorporating many years of proposals. Key features included:
- Classes: Finally, a class syntax for JavaScript (essentially sugar over prototypal inheritance). This addressed the developer desire for more familiar OOP structures and made patterns from frameworks (like Angular’s controllers or Ember’s classes) easier to implement natively.
- Modules (import/export): At last, a native module system. ES6 modules provided static, lexical imports and exports, enabling better static analysis and dead-code elimination (tree shaking). This was influenced heavily by Node’s CommonJS and AMD but designed to be statically analyzable (unlike CommonJS which is dynamic). The presence of a standard module definition promised to unify how JS code is packaged, though the transition cost would be high (we will see in 2016–2019 how Node and bundlers adapted).
- Arrow Functions (=>): A shorter syntax for functions that also lexically binds `this` (solving the classic `this`-binding confusion in JS). Arrows were influenced by CoffeeScript and widely welcomed – they made functional patterns (like mapping over arrays) concise and eliminated the need for `self = this` workarounds.
- Promises: The native Promise was standardized in ES2015[17]. This was huge for asynchronous code. Promises had been introduced in libraries (like Q and jQuery’s `$.Deferred`) and in the DOM spec (e.g., some DOM futures). TC39 intentionally hurried the inclusion of Promises to prevent the HTML/WHATWG spec from defining its own incompatible version[17]. Promises formalized a common pattern for async code, making it easier to avoid callback hell by chaining and composing async operations.
- Generators and Iterators: The `function*` generator syntax and `yield` were introduced, along with the iterable protocol (and built-ins like the `for..of` loop and the `Map` and `Set` data structures). Generators (inspired by Python, and implemented earlier in Firefox) unlocked new possibilities – they were even used to polyfill async workflows via libraries like co.
- Template Literals: Multi-line strings with embedded expressions, delimited by backticks. These made string building (especially HTML strings, or logging) much nicer, reducing the need for ugly concatenation and manual newlines.
- Destructuring: Convenient syntax to unpack arrays or objects. This was influenced by Python/Ruby, and by existing library patterns (like underscore’s `_.pluck`, which destructuring can replace when picking properties).
- Default + Rest + Spread: ES2015 added default parameters, the rest `...` parameter to gather arguments, and spread `...` to expand arrays in place (object spread followed in ES2018). These were common idioms in other languages and made JS less verbose – for instance, `Math.max(...array)` instead of `Math.max.apply(Math, array)`.
- `let` and `const`: Block-scoped variable declarations, addressing the biggest “gotcha” of JS (`var` hoisting and function scoping). `let`/`const` brought manageable scoping rules, closer to other languages, and `const` signaled the intent not to reassign. These were perhaps the most immediately impactful changes for day-to-day code.
- Typed Arrays: Actually introduced earlier for WebGL, but now standardized (as ArrayBuffer, Uint8Array, etc.), allowing binary data to be handled efficiently – crucial for things like image processing, networking, and crypto in JS.
- And much more: the Symbol type, `Proxy` (meta-programming by intercepting operations on objects), the `Reflect` API, `Object.assign`, improved `Date` parsing, etc. It was a huge spec.
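Several of these features can be seen working together in a short sketch (all values are illustrative):

```javascript
// A few ES2015 features side by side.
class Point {
  constructor(x, y) { this.x = x; this.y = y; }
  toString() { return `(${this.x}, ${this.y})`; }   // template literal
}

const points = [new Point(1, 2), new Point(3, 4)];  // const: no reassignment
const labels = points.map((p) => p.toString());     // arrow function
const [firstLabel, ...restLabels] = labels;         // destructuring + rest
const maxX = Math.max(...points.map((p) => p.x));   // spread instead of .apply

// A Promise chain replacing nested callbacks:
const delayed = (v) => new Promise((resolve) => resolve(v));
delayed(40).then((n) => n + 2).then((n) => console.log(n));
```

Each line replaces a noticeably clumsier ES5 idiom (constructor functions and prototypes, `function` expressions with `self = this`, `apply`, and callback pyramids).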
ES2015 was so large that the committee decided to switch to annual release cadence thereafter to ship smaller batches more regularly[31]. Implementing ES6 took time – by end of 2015, most engines supported a majority of features (Firefox and Chrome led, IE11 supported some but Microsoft had already moved to Edge with a new engine). Transpiler tools filled the gap in the interim, which we discuss next.
Babel and the Era of Transpilation (2014–2015): To let developers use ES6 features before all browsers supported them, projects like Traceur (by Google) and 6to5 emerged. 6to5, created by Sebastian McKenzie in 2014, was soon renamed Babel and became the de facto transpiler. Babel could convert ES6+ code into ES5 that could run on older browsers. By 2015, it was mainstream in the React community (since React’s JSX also needed transformation). Babel’s popularity had a profound effect: it decoupled developers from the pace of browser adoption. One could use the latest JS features and rely on Babel (and polyfills) to handle compatibility. This influenced TC39’s process too – Babel could implement Stage 2+ proposals as plugins, gathering feedback even before official standardization. Essentially, transpilation became a normal part of JS development pipeline in this era, heralding an important shift: JavaScript was no longer always directly executed as-is; it often went through a build step. This was a controversial idea at first (some lamented losing the view-source simplicity of the web), but it enabled rapid innovation (e.g., using ES6 modules and arrow functions in 2015 even if you had to support IE10).
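To make the idea concrete, here is roughly the kind of source-to-source transformation a transpiler performs – ES2015 input alongside a hand-written approximation of the ES5 output (real Babel output differs in detail):

```javascript
// ES2015 source:
const add = (a, b) => a + b;
const greet = (name) => `Hello, ${name}!`;

// A hand-written approximation of the ES5 output a transpiler emits:
var addCompiled = function (a, b) {
  return a + b;
};
var greetCompiled = function (name) {
  return 'Hello, ' + name + '!';
};
```

The two versions behave identically; the transpiler’s job is to preserve semantics while only using syntax the oldest target browser understands.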
TypeScript’s Ascent (2012–2015): Microsoft introduced TypeScript in 2012 – a superset of JavaScript adding optional static types, classes, and interfaces, which compiles down to JS. Early on, TypeScript was met with some skepticism (Google’s AngularJS team initially had their own project, AtScript, but by 2015 they chose TypeScript for Angular 2’s development). Around 2014–2015, TypeScript’s value became clear for large projects: by catching errors early and enabling IDEs to offer better IntelliSense, it improved reliability. Angular 2’s adoption of TypeScript (announced in 2014, released 2016) was a turning point – it signaled that even a major Google framework saw the merit in Microsoft’s TS. By 2015, TypeScript was at version ~1.5 with many ES6 features and excellent tooling, starting to gain significant community traction (though its explosive growth came in later years, as we’ll see). Meanwhile, Facebook’s Flow (2014) provided similar static type checking for JS, but with a different philosophy (gradual typing via annotations, without transpiling everything). Both tools indicated a strong desire in the community for static typing, large-scale maintainability, and richer IDE support. According to later surveys, a growing minority had adopted TypeScript by the mid-2010s, and community resources like DefinitelyTyped (type definitions for JS libraries) emerged. We will see in the 2016–2019 section how TypeScript becomes even more mainstream. For now, note that TS and Babel together meant the “build step” for JavaScript was increasingly standard.
npm’s Growth and the Frontend Packaging Revolution: The npm registry, though created for Node, by 2015 had become the hub for all JavaScript packages, including front-end ones. Tools like Browserify (2011) and webpack (2012–2014) allowed using npm CommonJS packages in the browser by bundling them. This shifted the front-end ecosystem from having separate “plugin repositories” or using Bower (a short-lived front-end package manager introduced in 2012) to just using npm for everything. The number of packages on npm skyrocketed in this period. By June 2019, npm had over 1 million packages[32], a 250% growth in just 2.5 years from 2017[33] (and continuing beyond). In 2015 specifically, the rate of new packages was enormous. This created some chaos (“dependency hell” and the famous “left-pad incident” in 2016, discussed later), but also a rich innovation landscape.
Build Tools Evolution: Task runners like Grunt (2012) and Gulp (2013) became widely used to automate workflows – minifying JS/CSS, optimizing images, etc. Grunt was configuration-centric; Gulp introduced streaming, code-centric build definitions. By 2014, many projects had a Gruntfile.js or gulpfile.js. Then came webpack (first released in 2012, popularized by 2015, especially in the React and Angular communities). Webpack was a module bundler: it treated every asset as a module (leveraging the new ES2015 module standard or CommonJS) and could bundle not only JS but also CSS and images via loaders. Webpack’s ability to do code splitting (lazy-loading parts of the app) and Hot Module Replacement (swapping modules in a running app during development) made it incredibly powerful. By the end of 2015, webpack was becoming the dominant tool for modern JS apps (with Browserify still used in simpler cases). We will cover the further evolution of bundlers (webpack 2, Rollup, etc.) in the next era.
Web Platform in 2012–2015: This period saw the maturation of many HTML5 APIs:
- WebSockets (as mentioned, standardized 2011, widely usable by 2012) allowed realtime apps (e.g., chat, live updates) via JS.
- Web Workers were widely available, enabling background threads for data crunching or running libraries (like image processing) off the main thread.
- Service Workers were conceptualized (first spec draft in 2014) – an evolution of the earlier AppCache approach – enabling interception of network requests and caching, critical for offline apps and Progressive Web Apps (PWAs). The first Service Worker implementations hit Chrome in 2015. In essence, this allowed JS to act as a network proxy for its own site – a huge step toward resilient apps (we’ll discuss PWAs and service worker security later).
- WebGL (Web Graphics Library) was introduced in browsers around 2011 (based on OpenGL ES 2.0), giving JS the power to do GPU-accelerated 3D graphics. By 2015, WebGL was standard in all major browsers, and libraries like three.js made it accessible, opening the door to games and visualization in the browser.
- Mobile Web & Responsive: With the smartphone explosion (iPhone in 2007, Android in the late 2000s), by 2014 a majority of web traffic was mobile. Web developers had to ensure their JS-heavy sites performed on weaker devices. This led to an emphasis on performance and on practices like responsive design (CSS-driven, though JS was often used to enhance mobile UX). Frameworks like Zepto (a lighter jQuery for mobile) appeared, though eventually mobile hardware caught up and mainstream libraries sufficed.
Language Proposals and TC39 Process Changes: After ES2015, TC39 moved to a yearly cadence. In 2015 they also formalized the staged proposal process[27]: Stage 0 (“strawman”), Stage 1 (idea), Stage 2 (draft spec), Stage 3 (candidate, need implementations), Stage 4 (finished, ready to merge into the spec). This process made the development of features more transparent and allowed community feedback via the now open TC39 GitHub. Some proposals in flight by 2015 included: async/await (in Stage 2 by 2015, championed by Microsoft’s Brian Terlson, inspired by C# and what had been done with generators + promises), Object.observe (which was an experiment to provide data binding observation natively, but ultimately withdrawn in favor of proxies and frameworks’ approaches), and decorators (proposed way to annotate classes, influenced by AtScript/TypeScript, though decorators would spend many years in proposal status). The TC39 shift to openness was partly a response to prior frustrations (like Node’s involvement, etc.) and it was largely successful – by engaging more with developers, the language evolved in tune with real needs.
Summarizing 2012–2015: This era was marked by explosive growth in the ecosystem and a harmonization of standards catching up to what developers wanted:
- Frameworks solved immediate app-building problems (with trade-offs of complexity).
- ES2015 gave developers a modernized language (resolving many pain points: verbose function syntax, lack of modules, clunky inheritance patterns) and reflected many patterns pioneered in userland.
- The build toolchain (npm, Babel, bundlers) became a standard part of web development – heralding what some called “JavaScript fatigue” around 2015, due to the many moving parts and rapid change.
- JavaScript truly became polyglot in usage: front-end, back-end, desktop (via the emerging Electron, 2013), and more.
Next, we move to 2016–2019, where the trends continue: incremental improvements in the language (ES2016+), further tooling sophistication, consolidation around a few dominant frameworks (notably React and Angular’s next generation), and an increasing focus on performance and scaling (TypeScript’s dominance, server-side rendering comeback, etc.).
2016–2019: Consolidation, TypeScript Adoption, Modern Tooling, and Renewed Focus on Performance Link to heading
The period from 2016 to 2019 in JavaScript’s history is characterized by consolidation and maturation. Many of the revolutionary ideas of the early 2010s became established best practices by the late 2010s. The community also grappled with the complexity that had accumulated, leading to efforts to streamline and improve developer experience and application performance. Meanwhile, the language continued to evolve yearly, adding useful (if smaller) features that often codified patterns developers were already using via libraries or transpilers.
Post-ES6 JavaScript (ES2016–ES2019): After the landmark ES2015 release, TC39 shifted to annual releases with fewer features each:
- ES2016 (ES7) was modest: it added the exponentiation operator `**` and `Array.prototype.includes` as its main features. These came directly from common needs: an exponent operator was long overdue (instead of using `Math.pow`), and `.includes` was a more semantic way to check array membership (developers had previously used libraries or the `indexOf(...) !== -1` hack). The quick inclusion of `.includes` shows the benefit of the new process: it addressed a minor but widespread pain (checking for a value in an array) swiftly[34].
- ES2017 (ES8) was more substantial, chiefly because it introduced async/await syntax. This fulfilled the promise (no pun intended) of making asynchronous code as readable as synchronous code. Instead of chaining `then()` calls or deeply nesting callbacks, developers could write an `async function` and use `await` inside it to pause until a Promise resolves. This built on top of Promises (which by now were well supported) and was influenced by similar constructs in C# and Python. Async/await was a game changer for Node.js and front-end code alike, making complex asynchronous flows (like fetching data, then reading files, then updating the UI) linear in appearance. It quickly became one of the most beloved features of modern JS. Notably, within months of ES2017’s approval, transpilers and Node (v8 in 2017) supported async/await, accelerating adoption. ES2017 also added Object.values/entries (for enumerating object properties easily) and SharedArrayBuffer/Atomics (for low-level threading operations, though temporarily disabled in browsers in 2018 due to Spectre security concerns).
- ES2018 brought rest/spread properties for objects (extending the `...` syntax already available for arrays), asynchronous iteration (`for await...of` loops to iterate over streams of data), `Promise.prototype.finally` (a convenient method to add a cleanup step to promise chains), and regex improvements (the dotAll flag, lookbehind assertions) to modernize regular expressions. These features continued the trend of codifying what had proven useful: e.g., object rest/spread was something developers already enjoyed via Babel (from the TC39 proposal), since it made merging and cloning objects easier (replacing `Object.assign` in many cases).
- ES2019 added optional `catch` binding (so you could write `catch { ... }` without declaring an unused error variable), `Array.prototype.flat`/`flatMap` for flattening arrays, `Object.fromEntries` (the inverse of `Object.entries`, useful for turning key/value pairs back into an object), and `String.prototype.trimStart`/`trimEnd`. Again, relatively small quality-of-life improvements. Notably, ES2019 also made ECMAScript a syntactic superset of JSON (allowing the U+2028/U+2029 line separators in string literals) and clarified some spec details.
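A short sketch gathering several of these additions in one place (values are illustrative):

```javascript
// ES2016–ES2019 additions side by side.
const squares = [1, 2, 3].map((n) => n ** 2);      // ** operator (ES2016)
const hasFour = squares.includes(4);               // Array.includes (ES2016)

const defaults = { retries: 3 };
const options = { ...defaults, timeout: 500 };     // object spread (ES2018)

const flat = [[1, 2], [3]].flat();                 // Array.flat (ES2019)
const pairs = Object.entries({ a: 1 });            // Object.entries (ES2017)
const roundTrip = Object.fromEntries(pairs);       // Object.fromEntries (ES2019)

// async/await (ES2017): asynchronous code that reads top to bottom.
async function total(load) {
  const nums = await load();                       // pause until the promise resolves
  return nums.reduce((sum, n) => sum + n, 0);
}
```

Every one of these replaces a pattern developers were already writing by hand or pulling from utility libraries.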
Throughout these, the language remained backward compatible – a core philosophy of TC39 is “don’t break the web.” This meant all new features had to coexist with old ones (even if that meant some awkwardness, like await being reserved inside async functions only, etc.). In design discussions, the committee often favored smaller, additive changes rather than sweeping ones (a lesson from the ES4 saga). Each feature typically related to a pattern widely recognized in the community. For example, the nullish coalescing (??) and optional chaining (?.) operators (which reached Stage 4 a bit after 2019, but were being designed in this period) clearly responded to the common pain of dealing with null/undefined without verbose checks – a problem so common that libraries (and TS’s non-null assertion) provided solutions. We see TC39 essentially listening to the ecosystem.
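For example, under these then-proposed operators (which shipped in ES2020), chained null checks collapse into a single expression:

```javascript
// Optional chaining (?.) and nullish coalescing (??) replace verbose guards.
const user = { profile: { name: 'Ada' } };

// Before: (user && user.profile && user.profile.name) || 'anonymous'
const userName = user?.profile?.name ?? 'anonymous';

// A missing intermediate property short-circuits to undefined,
// and ?? supplies the fallback:
const city = user?.address?.city ?? 'unknown';
```

Unlike `||`, the `??` operator only falls back on `null`/`undefined`, so legitimate falsy values like `0` or `''` survive.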
TypeScript Becomes Mainstream: By 2016–2019, TypeScript’s adoption soared. Several factors contributed: Angular 2 (2016) was written in TS and recommended it, Microsoft continued to invest heavily (great VS Code integration, more features like async/await support even before ES2017 was official), and the community’s projects grew larger, making static typing more valuable. State of JS surveys around 2018–2019 showed an enormous jump in TS usage and satisfaction – e.g., in 2019 the survey indicated ~38% of respondents used TypeScript and it had ~93% satisfaction[35]. By 2024, over 80% were using TS for at least half their code[28]. Major libraries like React embraced TS for their own codebase (React’s Flow use waned and they provided TS type defs). Node.js got better TS type definitions for its APIs. DefinitelyTyped (the community repository of TS type definitions for pure JS packages) grew to cover tens of thousands of packages, mitigating one adoption barrier. TypeScript in this era also kept up with JS, supporting ES2015+ features and even driving some proposals (like decorators, which Angular leveraged under an experimental TS decorator feature). The trade-off with TS is the build step and learning curve of a type system, but the benefits in catching errors early and improved IDE experience proved substantial, especially for large projects. By 2019, one could argue TS had become a de facto part of the JavaScript ecosystem for many – not a separate language but an extension. Microsoft’s stewardship and making TS open source and free helped alleviate community skepticism. Meanwhile, Flow, which peaked around 2016, declined as Facebook didn’t push it as actively externally and TS gained mindshare.
The Framework Wars Settled (Mostly): The late 2010s saw the framework landscape narrow to a few dominant players:
- Angular underwent a complete rewrite. Angular 2 (rebranded simply “Angular”, distinct from “AngularJS”) was released in 2016. It was a new framework (not backward compatible) built in TypeScript, using a component-based architecture (inspired partly by React’s success) and unidirectional data flow with explicit change detection (though still using Zone.js to simulate automatic detection). Angular (v2 through v9 by 2019) became a top choice for enterprise-scale apps, though its popularity in the wider OSS community was somewhat eclipsed by React.
- React firmly established itself as the most popular front-end library by the late 2010s. It benefited from an ecosystem of state management (Redux (2015) became almost synonymous with React for a while; MobX (2016) offered a more OOP-style reactive alternative; later, React’s own Context API improved) and a huge community. The introduction of React Hooks in 2019 (allowing state and side effects in function components without classes) was a significant evolution that re-emphasized functional programming and eliminated many class-component patterns. React also saw increasing use of server-side rendering (SSR) for performance and SEO, supported by frameworks like Next.js (see below).
- Vue.js emerged as a major player. Created by Evan You (first released 2014), Vue took a middle path between Angular’s templating and React’s reactivity, with a gentle learning curve. Vue 2 (2016) gained widespread adoption, especially in Asia and among those who wanted a simpler, more integrated approach than React’s pure-library (plus many choices) model. Vue offered two-way binding for form inputs but one-way data flow otherwise, and used a templating syntax with directives (like Angular) while remaining lightweight and incrementally adoptable (you could drop it into a page like jQuery). By 2019, Vue was often mentioned right after React in surveys, with high satisfaction.
- Emerging & niche frameworks: Svelte, introduced by Rich Harris (first version 2016, Svelte 3 in 2019), flipped the script by being a compile-time framework – your Svelte code (components with HTML, CSS, and JS logic) is compiled to efficient imperative JS that manipulates the DOM directly, with no virtual-DOM overhead. Svelte gained attention for its performance and tiny bundle sizes (no big runtime), and for its philosophy of running at build time rather than in the browser. It remained smaller in market share but very influential (the idea of the “disappearing framework”). Polymer (Google’s library for Web Components, 2015) and later LitElement/lit-html (2017+) tried to push usage of native Web Components (Custom Elements and Shadow DOM). Web Components as a standard matured (v1 specs in 2016) and by 2019 were supported in all modern browsers, but they did not overtake frameworks as initially hoped; rather, they found niche use in design systems and micro-frontend integration. Preact (a 3KB React alternative) and Inferno (a high-performance React-like library) showed continued interest in smaller, faster implementations.
By 2019, React was the undisputed leader in mindshare for new front-end projects (especially in Western open-source circles), with Angular and Vue also extremely significant worldwide. The “framework wars” of the early 2010s settled into a sort of balance: each major framework had its community and preferred use cases. Ember continued, but its popularity waned outside its loyal base. Backbone was largely obsolete, as was AngularJS 1.x (which reached end-of-life in 2021, with most projects migrating to Angular or rewriting in other frameworks). The community, while still opinionated, had matured past tribal wars into pragmatic comparisons – mixing frameworks (e.g., React for some parts, Vue for others) was feasible, though not common.
Rise of “Meta-Frameworks”: In tandem, there was growing interest in server-side rendering (SSR) and static site generation (SSG) again, to optimize for performance and SEO. History repeated somewhat: early web pages were all server-rendered; SPAs sacrificed that for richness; and by the late 2010s it was clear that initial load performance and SEO were suffering in SPA-only approaches. So frameworks began offering SSR modes:
- Next.js (2016) by Vercel (then Zeit): built on React, it provides an opinionated structure (file-based routing, automatic code splitting, SSR by default for each page). Next gained enormous popularity because it solved many production concerns (rendering on the server to serve fast initial HTML, then hydrating on the client; also supporting static export and API routes). Next.js can be seen as a meta-framework: it combines React with build/runtime infrastructure to handle routing, SSR, and so on, relieving developers from writing boilerplate for these tasks.
- Nuxt.js (2016): the equivalent for Vue, offering SSR and static-generation capabilities.
- Gatsby (2017): a React-based static site generator (SSG), which pre-renders pages at build time and uses React for client interactivity. It became popular for content-heavy sites (blogs, marketing sites) needing speed and SEO.
- Angular Universal: Angular’s SSR solution (initially a separate project, later integrated).
- Create React App (CRA) (2016): while not SSR, this tool (from Facebook) standardized a zero-config build setup for React SPAs, using Webpack and Babel under the hood. It drastically lowered the barrier to starting a React project and became widely used. Similar CLI tools existed for Angular (Angular CLI) and Vue (Vue CLI).
- New paradigms: some frameworks explored islands architecture (e.g., Marko by eBay, later Astro in 2021), where only parts of a page are hydrated as interactive, and resumability (e.g., Qwik in 2019/2020s), which tries to avoid rehydration cost entirely by resuming app state on the client.
These were responses to SPA performance issues, though they matured more in the early 2020s.
The trend here is that developers wanted the developer experience of SPAs and component frameworks, and the performance/SEO benefits of server-rendered or static pages. Meta-frameworks like Next.js essentially moved more work to the server build or runtime to deliver faster initial loads to users. They also embraced Edge computing (running SSR on CDN edges for global low-latency). By 2019, full-stack JS frameworks like Next were considered state-of-the-art for web applications.
Tooling: Build Tools Get Faster and Better – The mid/late 2010s saw Webpack solidify as the dominant bundler (v2, released 2017, added tree-shaking for ES6 modules; v4, in 2018, added a zero-config mode for simple cases). Rollup (released 2015) established itself as a bundler optimized for libraries: its ESM-focused design produced smaller output, and it pioneered “tree-shaking” (removing unused exports), which Webpack then adopted. Parcel (2017) arrived as a zero-config bundler that just works out of the box, gaining fans for its simplicity. There was also increased usage of npm scripts (the package.json “scripts” section) for simple build tasks, sometimes replacing the need for Gulp/Grunt by directly calling tools like Webpack or ESLint. Speaking of which:
Linters/Formatters: ESLint (2013) became the universal linter by 2016, replacing JSHint. It provided extensible rules and was essential in large projects to enforce style and catch bugs (like no unused vars). By 2019, nearly every project integrated ESLint. Also, Prettier (2017) emerged as an opinionated code formatter to automatically format code – its “no bikeshedding” approach to style was quickly embraced, drastically reducing discussions about formatting in PRs.
Testing: The testing ecosystem in JS matured: Jest (from Facebook, 2014) became a dominant testing framework by 2017+, offering an all-in-one solution with zero config, powerful mocking, and an interactive watch mode. It largely displaced older setups with Mocha + Chai or Jasmine for many. Cypress (2017) provided a much improved developer experience for integration/functional testing (in-browser, with time-travel debugging). These tools indicated how serious the community was about software engineering practices, matching what was standard in other language ecosystems.
Package Management: npm as a client had rough spots (performance issues, plus the infamous left-pad unpublishing fiasco, discussed below). In 2016, Yarn (by Facebook) was launched as an alternative npm client, offering faster installs and a lockfile for deterministic installs. Yarn was widely adopted, especially in large projects. npm responded with improvements (npm 5 in 2017 introduced its own package-lock.json). By 2018, npm vs. Yarn was a matter of preference; both coexisted. Later, pnpm (2017) introduced a novel approach with a content-addressed store and symlinks to reduce disk space and speed up installs, gaining traction in the late 2010s. Essentially, JavaScript package management became more sophisticated and reliable; lockfiles became standard to avoid “dependency hell” and ensure reproducible builds.
Security Incidents and Mitigations (mid/late 2010s): The burgeoning npm ecosystem also experienced some high-profile security problems that led to new practices:
- The left-pad incident (March 2016)[36][37]: A developer unpublished a tiny utility package, “left-pad” (11 lines of code that pad a string on the left), from npm as protest in a trademark dispute. This package was a transitive dependency of many projects (including major ones like Babel), causing builds worldwide to break. The incident highlighted the fragility of the ecosystem’s reliance on tiny packages and npm’s policy (at the time) of allowing unrestricted unpublishing. In response, npm changed its rules (e.g., unpublishing is restricted if a package is widely depended upon, and naming disputes are handled more cautiously)[38][36]. It also led many to question the practice of importing trivial functions from npm instead of writing them – but the inertia of the ecosystem continued to favor reuse, tempered by more awareness.
- The event-stream compromise (November 2018)[39][40]: A widely used npm package, event-stream, had an inactive maintainer who handed control to a malicious actor. The new maintainer released a version with a dependency, flatmap-stream, containing obfuscated malware targeting a specific app (a Bitcoin wallet)[41][40]. This went unnoticed until users caught strange behavior. It was a wake-up call about supply-chain security: even trusted packages could become attack vectors. The incident spurred npm and the community to consider measures like two-factor authentication for publishers, tighter scrutiny of dependencies, and tools to audit packages (npm added npm audit to scan for known vulnerabilities).
- Prototype pollution and other vulnerabilities: A body of security research emerged around how common JS patterns could be exploited. For example, merging objects naïvely can lead to prototype pollution (modifying Object.prototype via an {__proto__: …} payload).
Libraries adjusted to guard against this, and security tooling like Snyk became commonly used to detect vulnerabilities in dependencies.
To mitigate such risks, initiatives like lockfiles (which ensure you use the exact versions you audited) became more critical. npm itself added npm audit (leveraging the Node Security Platform database it had acquired) to warn developers of known vulnerable dependencies. Subresource Integrity (SRI) was introduced for CDNs (allowing <script> tags to include a hash to verify the fetched script hasn’t been tampered with). The ecosystem also started discussing package signing, though a robust signing infrastructure wasn’t fully in place by 2019.
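A minimal sketch of the prototype-pollution pattern described above, using a hypothetical naive deep-merge helper and a guarded variant (both illustrative, not any specific library’s code):

```javascript
// Naive recursive merge – vulnerable to prototype pollution.
function naiveMerge(target, source) {
  for (const key of Object.keys(source)) {
    const value = source[key];
    if (value && typeof value === "object") {
      target[key] = naiveMerge(target[key] || {}, value);
    } else {
      target[key] = value;
    }
  }
  return target;
}

// Attacker-controlled JSON: JSON.parse creates an OWN property named "__proto__".
const payload = JSON.parse('{"__proto__": {"isAdmin": true}}');
naiveMerge({}, payload);
console.log({}.isAdmin); // true – every object in the realm is now "polluted"
delete Object.prototype.isAdmin; // clean up for the rest of the demo

// Guarded merge: skip the dangerous keys.
const BLOCKED = new Set(["__proto__", "constructor", "prototype"]);
function safeMerge(target, source) {
  for (const key of Object.keys(source)) {
    if (BLOCKED.has(key)) continue;
    const value = source[key];
    if (value && typeof value === "object") {
      target[key] = safeMerge(target[key] || {}, value);
    } else {
      target[key] = value;
    }
  }
  return target;
}

safeMerge({}, payload);
console.log({}.isAdmin); // undefined – prototype left intact
```

The guard list mirrors what hardened merge utilities adopted after these disclosures: reject the handful of keys that can reach an object’s prototype chain.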
Node.js Maturity and New Runtimes: Node continued to evolve in this period:
- After the io.js fork (2014) and reunion (2015) under the Node.js Foundation, Node released versions 4, 6, 8, and so on, on a steady schedule with LTS (Long Term Support) releases, which made it stable for enterprises. By Node 8 (2017), V8 engine updates brought native async/await support and performance improvements.
- Node.js v10 (2018) and v12 (2019) finally delivered ESM (ES Modules) support, first as an experimental feature and then on a path to stability. This was a long journey: the design of how Node would handle ESM vs. CommonJS was contentious (a .mjs extension vs. a package.json "type": "module" field, etc.). By 2019, Node 12 allowed modules via the .mjs extension or by setting a flag or the package type. This meant that, at last, front end and back end could share the same module syntax (import/export) without transpilers. In practice, adoption was slow because the CommonJS ecosystem was huge and many packages were not yet dual-format. But it was a crucial step toward unification. Modern bundlers (Webpack, Rollup) also fully supported ESM, and web browsers had supported <script type="module"> since 2017. So by 2019, ES Modules were truly viable end-to-end, though CommonJS remained prevalent in Node land for compatibility.
- Node Performance & Features: Node integrated new V8 features like Ignition/TurboFan (a new interpreter plus optimizing compiler that replaced the old pipeline by 2017[42][43]), which boosted performance and memory efficiency. Node also added convenient APIs like util.promisify to bridge callback APIs to promises, and Worker Threads (stable in v12) to allow multi-threaded JS in Node for heavy CPU tasks (an alternative to the multi-process model). The V8 team’s continuing JIT improvements (like TurboFan’s “sea of nodes” optimizer[43]) meant Node could handle more intensive workloads, narrowing the gap with traditionally faster languages for certain tasks.
- Concurrency Models Expanding: Besides Node’s event loop, there was interest in alternative concurrency models, e.g., WebAssembly threads (for browsers, which came via SharedArrayBuffer + Atomics), but that’s more 2020s. In Node, one notable addition was support for async_hooks (low-level API to track async context, used for debugging or performance monitoring).
- Alternate Runtimes: Late 2010s also saw new JavaScript runtime ideas:
- Deno: Announced by Ryan Dahl (Node’s creator) in 2018, Deno aimed to fix Node’s design regrets (security model, module system, etc.). Deno uses ES Modules, TypeScript out-of-the-box, and a sandboxed permission system for file/net access by default (inspired by browsers). It was still in development (v1.0 came mid-2020) but stirred excitement as a “modern Node”.
- Electron (2013 and rising): Using Node + Chromium to create desktop apps with web tech (made by GitHub). By 2016–2019, Electron was behind apps like VSCode, Slack, Discord. It showed JS’s reach to desktop, though criticized for heavy resource use. It emphasized the need for security (embedding Chromium means you have a full browser stack to keep updated).
- React Native (2015) matured in this period for mobile apps (JS controlling native widgets). Many companies adopted it, extending JS’s domain to iOS/Android apps. Its existence also influenced web libraries (the concept of “learn once, write anywhere” – React DOM vs React Native share logic).
- Serverless functions (like AWS Lambda, Azure Functions, Cloudflare Workers) took off. Node was the primary runtime for many serverless providers (starting with AWS Lambda’s Node support in 2015). Cloudflare Workers (2017) interestingly used a different approach – running JS in V8 isolates (similar to Chrome’s model) for extremely fast startup and at edge locations, with a subset of APIs. This model influenced later worker-like runtimes (like Deno Deploy, Fastly’s Compute@Edge with JS/Wasm).
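To make the isolate-based model concrete, here is an illustrative sketch of the service-worker-style handler shape that edge runtimes such as Cloudflare Workers popularized (a module-worker-style object; we invoke it directly so the example is self-contained, relying only on the standard Request/Response/URL classes, which are global in modern runtimes including Node 18+):

```javascript
// Edge-style handler: one function receives a Request, returns a Response.
// No Node-specific APIs (http, fs) – only web-standard classes.
const worker = {
  async fetch(request) {
    const url = new URL(request.url);
    if (url.pathname === "/hello") {
      return new Response("Hello from the edge!", {
        headers: { "content-type": "text/plain" },
      });
    }
    return new Response("Not found", { status: 404 });
  },
};

// Simulate an incoming request to show the shape:
worker
  .fetch(new Request("https://example.com/hello"))
  .then((res) => res.text())
  .then((body) => console.log(body)); // "Hello from the edge!"
```

Because the handler touches only web-standard APIs, the same code can conceptually run in a browser Service Worker, an edge isolate, or modern Node – exactly the portability these runtimes pushed frameworks toward.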
Internationalization and Accessibility Progress:
- Intl API: After the initial ECMA-402 internationalization API (2012), which gave us Intl.Collator, Intl.NumberFormat, and Intl.DateTimeFormat, subsequent editions (the second in 2015, etc.) expanded the feature set. By the late 2010s, new Intl features like Intl.RelativeTimeFormat (for “5 minutes ago”-style phrases) and Intl.PluralRules (to help with plurals in translations) had been added, with Intl.Segmenter and others in the works. This built-in i18n support reduced the need for large moment.js or globalize.js bundles for common formatting.
- Unicode: JavaScript finally fully embraced Unicode, with full Unicode support in regexes (the u flag, plus Unicode property escapes in ES2018) and, earlier, code point escapes (\u{1F600}) and String.prototype.codePointAt / String.fromCodePoint in ES2015 to properly handle astral symbols (like emoji). This was driven by the recognition that the web is global and that emoji and other characters outside the BMP are common.
- Accessibility: The late 2010s saw increased emphasis on web accessibility. Frameworks began to integrate a11y checks (e.g., React’s dev mode warns if you use <img> without alt text). Tools like ESLint gained a11y plugins to catch issues (like improper ARIA attributes). There was also more guidance for SPA frameworks on ensuring accessible output (focus management, semantic tags). Libraries like Angular and React generally left semantics to developers but provided escape hatches or warnings. The community (driven by initiatives like The A11Y Project) emphasized that single-page apps need to be as accessible as multi-page apps (e.g., handling focus when the route changes, announcing content changes via ARIA live regions).
- Assistive technology vendors worked more closely with browsers to support dynamic content changes; by 2019, ARIA compatibility with SPAs had improved, though it was not perfect.
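A short sketch of the Intl and Unicode features mentioned above (the output strings in comments are what V8-based engines produce for the "en" locale):

```javascript
// Intl.RelativeTimeFormat: "5 minutes ago"-style phrases without a library.
const rtf = new Intl.RelativeTimeFormat("en", { numeric: "auto" });
console.log(rtf.format(-5, "minute")); // "5 minutes ago"
console.log(rtf.format(1, "day"));     // "tomorrow"

// Intl.PluralRules: pick the right plural category for a count.
const pr = new Intl.PluralRules("en");
console.log(pr.select(1)); // "one"
console.log(pr.select(4)); // "other"

// Astral symbols: an emoji is one code point but two UTF-16 code units.
const face = "\u{1F600}"; // 😀 via an ES2015 code point escape
console.log(face.length);                      // 2 – code units
console.log([...face].length);                 // 1 – code points
console.log(face.codePointAt(0).toString(16)); // "1f600"

// ES2018 Unicode property escapes in regexes (note the u flag).
console.log(/\p{Script=Greek}/u.test("π")); // true
```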
Sociological Notes (2016–2019):
- Open Source Governance: The jQuery Foundation merged with the Dojo Foundation in 2016 to form the JS Foundation, which then merged with the Node.js Foundation in 2019 to become the OpenJS Foundation under the Linux Foundation. This umbrella houses many key projects (jQuery, Node, webpack, ESLint, etc.), providing neutral governance. It signals the JavaScript ecosystem’s maturity that vendor-neutral organizations handle core infrastructure. Node.js by now had a stable Technical Steering Committee governance model.
- npm Inc had some turbulent times (changing CEOs, etc.), but an important event came in 2020 (just after our period): npm was acquired by GitHub/Microsoft[44]. This alleviated worries about npm Inc’s financial stability and integrated npm with the largest code host.
- Community Culture: With increasing complexity, the notion of “JavaScript fatigue” emerged around 2016 – developers joked about the ever-changing landscape of frameworks and tools. By 2018–2019, however, things stabilized a bit: React/Vue/Angular dominated, and the tooling around them (Webpack, Babel, etc.) became standard and well documented. The community also became more welcoming to beginners, with better resources and a push for inclusivity (many JS conferences adopted codes of conduct in this period).
- Stack Overflow and Learning: JavaScript became the most-tagged and most-viewed topic on Stack Overflow by the late 2010s, reflecting its massive user base. The language’s quirks (like == vs. ===, or the details of the event loop) were increasingly well understood thanks to countless articles and even fun explainers (e.g., Philip Roberts’ “What the heck is the event loop?” talk in 2014). ECMAScript proposals were now tracked eagerly by developers (with stage 3 proposal updates covered on Medium, InfoQ, and the like).
- The State of JS survey began in 2016 and became an annual barometer of trends (though often noted to be skewed toward early adopters and English-speaking respondents).
For example, these surveys showed React overtaking AngularJS, the surge of TypeScript, the interest in GraphQL (a query language for APIs from 2015) etc. They also inadvertently highlighted geographic biases – e.g., Angular might be underrepresented due to survey demographics vs known corporate usage. But the survey sparked discussions and helped identify trends (like saying “maybe the community is moving toward X”).
By the end of 2019, JavaScript had solidified its place not just in the browser, but across the stack and computing paradigms. The ecosystem was simultaneously extremely deep (with specialized tools for everything) and broad (running on microcontrollers with Node/Johnny-Five, powering machine learning with TensorFlow.js, etc.). However, complexity had grown as well. This complexity prompted the next wave of innovation around 2020–2022, focusing on performance optimizations (like new bundlers and dev servers for speed) and rethinking how to send less JavaScript to browsers (islands architecture, resumability, etc.).
The year 2019 also closed a chapter with some counterfactual questions seemingly answered: e.g., What if JavaScript wasn’t fast enough? – V8 and others proved it could be fast, enabling use-cases like Node and complex apps. What if we had static typing in JS? – TypeScript provided that without forking the language. What if modules never unify? – by 2019, it appeared that ES Modules would unify front and back ends after all, albeit slowly. Yet, new “what-ifs” were arising (like “what if JS had built-in reactive primitives?” or “what if we target WebAssembly for performance-critical parts?”).
Next, we look at 2020–2022, where JavaScript faces new challenges and opportunities: the rise of ultra-fast build tools, new runtimes like Deno and Bun, the push for “modern JS only” development, and the influence of the global COVID-19 pandemic on web development practices (more online collaboration, maybe more open source). We will also see how meta-frameworks mature and new ideas in fine-grained reactivity and edge deployment come to the forefront.
2020–2022: Modern Build Tools, Deno & Bun, ESM Everywhere, and the Performance Renaissance Link to heading
The early 2020s have been an exciting and fast-evolving period for JavaScript. As the ecosystem matured, attention shifted to developer experience (DX) optimizations – making tools faster and workflows smoother – and to runtime innovations that challenge Node.js. Additionally, trends like “JavaScript less” (shipping less JS by leveraging SSR/SSG or alternative tech) gained momentum as web apps strove for better performance on slow networks/devices. This period is also marked by consolidation in some areas (e.g., TypeScript basically becoming a standard part of development) and experimentation in others (new frameworks and approaches to rendering).
ES2020, ES2021, ES2022 – Smoothing the Rough Edges: Annual ECMAScript releases continued, adding useful if not groundbreaking features:
- ES2020 introduced the nullish coalescing operator (??) and optional chaining (?.), both eagerly awaited because they directly address common coding pain. For example, foo?.bar?.baz safely accesses nested properties, avoiding verbose if checks or custom utility functions – a big win for code clarity when dealing with potentially undefined values[28] (the popularity of these features was reflected in surveys, where static typing and dealing with null were top pain points[28]). ES2020 also added BigInt (a new primitive type for arbitrarily large integers, enabling use cases like cryptography or high-precision timestamps that exceed Number’s 53-bit safe-integer range). Other ES2020 goodies: Promise.allSettled (wait for all promises regardless of fulfillment or rejection), globalThis (a unified global-object alias across environments), and dynamic import() (officially making import callable as a function that asynchronously returns a Promise – crucial for module code-splitting).
- ES2021 gave us logical assignment operators (&&=, ||=, ??=) – minor but handy shortcuts – and String.prototype.replaceAll, a long-requested utility that avoids regexes or manual loops when replacing all instances of a substring. It also included Promise.any (resolve to the first fulfilled promise, complementing Promise.race, which can also settle on rejection) and weak references and finalizers (WeakRef and FinalizationRegistry, advanced GC-related features for certain memory-management patterns).
- ES2022 (finalized mid-2022) added class fields and private class methods/fields, finally standardizing class features that TypeScript and Babel had long provided. Now one could write class C { #x = 10; } for a truly private field. It also introduced Array.prototype.at (a negative-indexing convenience), Error.cause (to chain error causes), and top-level await (allowing await at the top level of an ES module, simplifying module initialization code such as dynamic imports or other async setup)[45][46]. Top-level await was particularly significant for modules that needed to, say, fetch initial data or load a Wasm module – previously one had to either do that before starting the app or resort to hacks; now modules themselves could await something during import[47][48]. This required adjusting how module loading works (so that an importing module can wait for a dependency’s top-level await), but by 2022, engines supported it.
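The headline conveniences from these three releases fit in a few lines (a quick sketch; the values in comments assume the script runs top to bottom):

```javascript
// Quick tour of ES2020–ES2022 conveniences (plain script, no imports needed).
const config = { server: { port: 8080 } };

// ES2020: optional chaining + nullish coalescing
const port = config.server?.port ?? 3000;          // 8080
const host = config.network?.host ?? "localhost";  // "localhost" (no TypeError)

// ES2020: BigInt for integers beyond Number.MAX_SAFE_INTEGER
const big = 2n ** 64n;

// ES2021: logical assignment + replaceAll
let retries;
retries ??= 3;                                     // 3
const slug = "my blog post".replaceAll(" ", "-");  // "my-blog-post"

// ES2022: negative indexing, error chaining, private class fields
const last = [10, 20, 30].at(-1);                  // 30
const err = new Error("fetch failed", { cause: new Error("ECONNREFUSED") });

class Counter {
  #count = 0;                 // truly private – inaccessible outside the class
  increment() { return ++this.#count; }
}
const c = new Counter();

console.log(port, host, retries, slug, last, err.cause.message, c.increment());
```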
These additions reflect the continued refinement of JavaScript, often bringing capabilities developers either had via transpilers or simply wished the language had. Private class fields, for example, had been a long TC39 discussion (with differing proposals since 2015) to bring true encapsulation. The chosen syntax (#) was a bit controversial but ultimately implemented, addressing concerns from library authors about things like building truly private state in classes (which was previously done via WeakMaps or closures).
One notable proposal in progress by 2022 was Temporal, a modern date/time API to replace the notoriously flawed Date. Temporal (at Stage 3 by 2022) grew from the recognition that Date was broken (mutable, poor time-zone handling, etc.) and that libraries like moment.js (being deprecated in favor of date-fns, Luxon, and others) had limitations. Temporal provides a comprehensive set of types – Instant, ZonedDateTime, PlainDate, PlainTime, and so on – and is immune to the usual DST and mutability pitfalls. As Patrick Brosset’s analysis of State of JS 2024 notes, Temporal cannot come soon enough, given that “date management” was a top pain point after static typing and browser support[49]. By 2022, developers eagerly awaited Temporal (which might land in ES2023 or ES2024).
The 2020 Performance Renaissance: Lightning-Fast Build Tools and Dev Servers: A striking development in 2020 was a wave of new tools focused on raw speed, often written in languages like Go or Rust for performance:
- esbuild – a JS bundler/transpiler written in Go by Evan Wallace, introduced in early 2020, that could bundle and transpile JS/TS at astonishing speed (on the order of ~100× faster than Webpack/Babel in some cases)[50][51]. Its secret was Go’s efficiency and a bundling architecture designed for speed over completeness (in the early days it didn’t handle every edge case but was “good enough” for many scenarios). esbuild showed that JS tooling written in JS might be nearing its performance limits because, ironically, dynamic languages are slower at CPU-bound tasks like parsing and code transformation. It inspired others and also got integrated: for instance, esbuild became a core under-the-hood piece of many tools (like Vite’s dev server, see below).
- swc – a JS/TS compiler written in Rust (created by DongYoon Kang) that similarly aimed to replace Babel with a much faster alternative. swc was used by projects like Next.js (replacing Babel) to speed up builds significantly. Rust’s memory safety and performance made swc a reliable, fast choice.
- terser (the successor to UglifyJS) was still the main minifier, but there were experiments like esbuild’s own minifier, and Google’s Closure Compiler still existed for advanced optimizations. The speed focus, though, often meant using simpler but faster minifiers (esbuild’s approach).
- Vite – introduced by Evan You (creator of Vue) in mid-2020[52], Vite is a dev server and build tool that leverages ES modules in the browser during development. The idea: during development, don’t bundle at all – just serve source files as modules, with transformations done on the fly (via esbuild for speed).
This means instant server start (no big bundle generation) and lightning fast HMR (Hot Module Replacement) because updating a module doesn’t require rebuilding a giant bundle, just that module. Vite uses native ESM import in browsers (which by 2020 was supported in modern browsers) to make requests for modules as needed. It falls back to building a production bundle (using Rollup under the hood) for deployment. Vite’s approach was possible now that we could assume modern ES module support in dev (for older browsers, you’d still run a build).
Vite quickly gained popularity beyond the Vue world – it supported React, Preact, Svelte, and more via plugins. It essentially offered a much-improved DX: near-instant hot reload and much simpler configuration compared to heavy Webpack setups. Vite represented a new philosophy: lean on the platform (browser ES modules) and use faster compilers (esbuild) where bundling or transpiling is needed. By 2022, many projects and frameworks had moved to Vite or similar dev-server architectures. Even Webpack, while still widely used (especially via CRA and older projects), started to see competition from these leaner setups.
- Snowpack (2020) was another take on the same idea (an ESM-based dev server); it inspired Vite but was ultimately overshadowed by it. Snowpack pivoted to a framework-agnostic build tool and was then sunset as its concepts merged with others.
- Parcel 2 (2021) arrived with a Rust-based core for faster performance too, continuing the trend of rewriting critical parts in systems languages.
- These innovations significantly reduced the notoriously slow feedback loop in development. For large codebases where Webpack incremental builds might take many seconds, Vite could update in under a second. This productivity boost was very well received.
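As a sketch of how little configuration this generation of tools demands, a minimal (hypothetical) vite.config.js for a React project might look like this – the plugin and options shown are illustrative, not a complete or authoritative config:

```javascript
// Hypothetical minimal Vite config for a React project.
import { defineConfig } from "vite";
import react from "@vitejs/plugin-react";

export default defineConfig({
  plugins: [react()],          // JSX transform + fast refresh in dev
  server: { port: 3000 },      // dev server serving native ES modules
  build: { target: "es2020" }, // production bundle built via Rollup
});
```

Running the dev server serves source files as native ES modules and transforms them on demand, while the production build falls back to a conventional optimized bundle.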
Convergence of ESM – Node.js and Browsers at Last: By 2022, ES modules (ESM) were the standard for front-end code (via build tools, or natively in browsers for simpler cases), and Node.js had made ESM usable: it shipped unflagged in the v12/v14 lines and was considered fully stable by v16 (2021); v18 (2022) even enabled importing JSON modules experimentally. The ecosystem still had to catch up (many Node packages remained CommonJS-only). Tools like webpack 5 introduced a mode to output ESM bundles and to treat packages that specify "type": "module" appropriately. A lingering challenge was the dual package hazard: some packages shipped both CommonJS and ESM builds, leading to potential duplication or mismatched behavior (e.g., two copies of what should be a singleton) if not carefully handled. But overall, the momentum was toward ESM. Deno, of course, was ESM from the start (no CommonJS support at all). Build tools like Vite and Rollup were inherently ESM-centric. Node’s own module loader had to handle mixing (it treats a .js file as one format or the other depending on the nearest package.json). By 2022, newer projects often went full ESM, and major libraries were starting to publish ESM builds or move to ESM-only (for example, Preact and lodash, via lodash-es, shipped ESM builds).
This unified module space meant the vision of “universal JavaScript” came closer: the same syntax and modules work everywhere. It also allowed tools to do smarter optimizations (like tree-shaking based on static imports). Some friction remained, such as how to handle __dirname in ESM (Node introduced import.meta.url for this) and interoperability (Node allowed importing CommonJS from ESM, and provided createRequire for the remaining cases).
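Dual-format packages typically address the hazard by declaring conditional exports in package.json, so that require and import resolve to matching builds (a sketch; the file names and package name are illustrative):

```json
{
  "name": "my-lib",
  "version": "1.0.0",
  "main": "./dist/index.cjs",
  "module": "./dist/index.mjs",
  "exports": {
    ".": {
      "import": "./dist/index.mjs",
      "require": "./dist/index.cjs"
    }
  }
}
```

The "exports" map lets Node (and bundlers) pick the right build per consumer, while the legacy "main"/"module" fields keep older tooling working.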
Edge Computing and Cloudflare Workers: The deployment paradigm also shifted somewhat. Cloudflare Workers (launched 2017, matured by 2020) allowed running JS (using V8 isolates, not Node) at the edge, in Cloudflare’s data centers globally. Workers expose a different API (closer to Service Workers) and impose tight CPU-time limits per request (tens of milliseconds, depending on plan). Yet they enabled new kinds of apps (very fast serverless functions, on-the-fly rewriting of HTML at the CDN). Other providers followed, e.g., Fastly’s Compute@Edge (with JS/Wasm support) and Deno Deploy (running Deno on the edge). These environments typically don’t have a full Node.js API (no file system, etc., for security and performance reasons), which in turn pushed framework authors to rely more on standard web APIs (like fetch, which is available in these environments, instead of Node’s http or axios by default). By 2022, some frameworks (like Next.js and SvelteKit) could target these edge runtimes for specific features (like running middleware or API routes on the edge for speed).
Alternative Runtimes – Deno and Bun Shake Things Up:
- Deno reached 1.0 in May 2020 (timed, interestingly, at the start of the pandemic). It offered a fresh take: secure by default (you must grant permissions to read files, environment variables, or the network), TypeScript out of the box (it can execute TS directly, compiling behind the scenes), and ESM modules only (with imports by URL or file path, no package.json needed). Deno also includes a built-in test runner, linter, and formatter – an attempt at a batteries-included environment so one doesn’t need a plethora of external tools for the basics. Despite the hype, Deno did not overthrow Node (which has a huge incumbent advantage), but it steadily improved, found use in some niches, and influenced Node (Node later added an experimental permission model inspired by Deno and considered more web-API compatibility).
- Bun – a surprise entrant announced by Jarred Sumner in 2021 (with early access by 2022). Bun is a JS runtime written in Zig (a low-level language) that aims to be all-in-one: it implements many of Node’s APIs, plus a bundler, a transpiler, and even an integrated SQLite binding. Bun’s highlight is speed: initial benchmarks showed extremely fast startup and throughput (often faster than Node and Deno by multiples) and very fast package installs (it has its own package manager backed by a global cache, similar in spirit to pnpm). Bun’s performance comes from using JavaScriptCore (the WebKit engine) instead of V8, plus aggressive native code in Zig. By 2022, Bun was still in beta but getting a lot of attention as a potential challenger to Node for development and possibly production. Its use of WebKit’s JS engine is unusual on the server, and it raises interesting questions about engine diversity – for years V8 was effectively the only engine used server-side (Microsoft’s ChakraCore-based Node build never caught on and was discontinued); Bun showed that JSC could compete in certain scenarios.
It’s too early to know how widely Deno or Bun will be adopted long-term. Node remains deeply entrenched. But they have already had positive influence: Node has been incorporating more web APIs (like fetch globally in Node 18, inspired by Deno’s early adoption of fetch; also Web Streams API in Node 16+ to align with what browsers and Deno have). Competition is pushing Node to modernize and standardize.
Frontend Frameworks 2020–2022: React’s Continuing Evolution, Newcomers in Fine-Grained Reactivity and Islands: - React: Maintained its dominance. Major developments included: React Hooks (2019) fundamentally changed how React code is written (class components became much rarer; functional components with hooks became standard). Hooks made state management more composable – custom hooks allow sharing logic easily. React’s core team also worked on Concurrent Mode (later called concurrent features), culminating in React 18 (2022) which introduced a new rendering engine capable of interruptible rendering and new APIs like useTransition and Suspense improvements to better handle asynchronous data fetching. Notably, Suspense for data fetching was a big theme – the idea of throwing a Promise from a component to indicate it’s loading and let React suspend was refined. React 18 also enabled Server Components (still experimental in 2022): a new paradigm where some components run on the server (with full access to backend data) and output a serialized tree to be merged with client components. Server Components aim to combine the benefits of SSR (no large bundle for that component on client) with fine granularity. This is in response to performance concerns – a way to reduce JS shipped by not shipping logic that can run on server. - Meta-Frameworks: Next.js continued to rise, and in 2020–2022 it added features like Incremental Static Regeneration (ability to rebuild pages on the fly in production) and closer integration with Vercel’s edge functions. Other frameworks in this space: - Gatsby slowed somewhat, with some users moving to Next for more flexibility or to newer SSG tools. 
- Remix (by the React Router team, released 2021) emerged as a contender focusing on web fundamentals: it embraces native browser capabilities (like using <form> and progressively enhancing it, rather than relying only on client JS), and it has an innovative approach to routing and data loading (with loaders and actions that run on the server but with an API that feels almost like client-side code). Remix’s philosophy is closer to “let the server handle more, use JS for enhancement,” which aligned with performance goals. - SvelteKit (the full-stack framework for Svelte, in development and in wide beta by 2022) applied Svelte’s reactive approach to a Next.js-like framework, with great attention to DX and performance (since Svelte is compiled, its output is minimal). - Astro (2021) championed “islands architecture” explicitly: by default, Astro renders components to HTML at build time and doesn’t ship JS for them, unless you mark a component as interactive; even then, you can choose to hydrate it on interaction, on idle, or on a media query. Astro could also integrate React/Vue/Svelte components all in one site. It’s like a return to multi-page static sites, but allowing interactive islands. Astro gained popularity for content-heavy sites where using a giant SPA is overkill. - Fine-Grained Reactivity Frameworks: React (and Vue to an extent) updated the DOM in batches via virtual DOM diffing – an approach with overhead. An alternative pattern, fine-grained reactivity, traces back to Knockout.js and earlier, where you track individual dependencies and only update what’s necessary. New frameworks embracing this: - Solid.js (by Ryan Carniato, announced 2018, gained popularity ~2021) has a React-like API but under the hood is completely different: it creates no virtual DOM, instead it creates signals and effects such that when a piece of state changes, only the directly affected DOM updates. This yields extreme performance and low memory usage (Solid often topped benchmarks).
- Qwik (by Miško Hevery, creator of AngularJS, announced 2021) took fine-grained reactivity and combined it with a novel concept of resumability: Qwik applications are rendered on the server and delivered as HTML with minimal JS; rather than hydrating on load (replaying all components on the client to attach event listeners), Qwik’s compiler serializes the application’s state into the HTML. When the user interacts, only then is the relevant component’s code fetched (lazy-loaded), and it “resumes” from where the server left off. This avoids the big cost of hydration and can make even large apps very fast on first load. Qwik’s approach is complex but promises to drastically cut JS payloads. - Vue continued solidly (Vue 3, released 2020, introduced a Composition API similar to React hooks for better logic reuse). Vue’s community remained strong, especially where simplicity is valued. - Angular (v13 by 2022) persisted in enterprise. It embraced Ivy (a new rendering engine, fully rolled out in 2020) and gradually became more aligned with modern standards (e.g., dropping old IE support, embracing more RxJS for state).
The fact that new frameworks like Solid and Qwik are cropping up shows that front-end innovation hasn’t slowed – it’s addressing the performance concerns of SPAs (just as React originally addressed maintainability issues of two-way binding frameworks). They often show impressive results in core web vitals (Google’s metrics for site speed). Whether they dethrone React in popularity remains to be seen, but their ideas are influencing the major frameworks too. For instance, React is exploring a “Signals” concept possibly similar to Solid’s signals for a future fine-grained reactive system.
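The fine-grained reactivity model that Solid popularized can be illustrated with a toy signal/effect implementation in plain JavaScript. This is an illustrative sketch, not Solid’s actual internals; the names `createSignal` and `createEffect` mirror Solid’s API for familiarity only:

```javascript
// Track which effect is currently running so signal reads can
// register it as a subscriber (the core of fine-grained reactivity).
let currentEffect = null;

function createSignal(value) {
  const subscribers = new Set();
  const read = () => {
    if (currentEffect) subscribers.add(currentEffect);
    return value;
  };
  const write = (next) => {
    value = next;
    // Only the subscribers of THIS signal re-run - no diffing pass.
    subscribers.forEach((fn) => fn());
  };
  return [read, write];
}

function createEffect(fn) {
  currentEffect = fn;
  fn(); // run once to collect dependencies
  currentEffect = null;
}
```

In a real framework the effect would update a specific DOM node; here the point is that a state change touches only its direct dependents, with no virtual DOM traversal.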
Back to the Server?: Another trend in early 2020s is a renewed appreciation for doing more on the server – partly spurred by performance (reduce JS) and partly by the rise of serverless (easy to run server code on demand). We see frameworks offering more seamless integration of server-side logic: - Next.js 12 introduced middleware and edge functions, and Next 13 (late 2022) introduced an app directory that by default does more server rendering and streaming (leveraging React 18’s new features). - Remix fundamentally encourages server for data mutations and uses progressive enhancement for forms as noted. - RPC-like frameworks: tools like Blitz.js (built on Next, briefly popular around 2020) tried to provide a fullstack Rails-like experience with direct function calls from client to server (abstracting fetch calls).
GraphQL & Alternative Data Fetching: While not core JS language, data layer developments influenced JS apps: - GraphQL, introduced by Facebook in 2015, became mainstream by 2020 for APIs. GraphQL allows flexible queries from client to server, and many JS developers started using GraphQL via libraries like Apollo Client or urql. GraphQL’s strong typing also likely increased appetite for typed frontends (complementing TS). - However, GraphQL also saw some pushback (complexity, server-side performance pitfalls such as N+1 queries, caching issues), and there’s a parallel trend to simply use simpler patterns: e.g., React Query/TanStack Query became popular for REST – a library that manages server state and caching, making data fetching simpler without GraphQL. - Libraries like tRPC (2021) emerged, allowing you to call server functions directly from the client with full TypeScript type inference (the server defines the shape, the client gets the types). It uses RPC over HTTP under the hood. This resonated with full-stack TypeScript teams as a simpler alternative to GraphQL for internal APIs.
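The core idea behind server-state libraries like TanStack Query – cache results by key and deduplicate in-flight requests – can be sketched in a few lines. The names `cache` and `query` here are illustrative, not the library’s actual API:

```javascript
// Keyed cache of promises: storing the promise (not the resolved
// value) means concurrent callers for the same key share one request.
const cache = new Map();

function query(key, fetcher) {
  if (!cache.has(key)) {
    cache.set(key, fetcher());
  }
  return cache.get(key);
}
```

Real libraries add staleness windows, refetch-on-focus, and invalidation on top of this pattern, but the promise-as-cache-entry trick is the heart of it.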
Impact of COVID-19: The pandemic starting in 2020 accelerated digital transformation, which meant more demand for web apps and collaboration tools – indirectly boosting JavaScript usage even further (e.g., many companies that were not tech-heavy had to invest in web). It also forced conferences online – JSConf and others went virtual or paused. The community found new ways to collaborate (virtual meetups, and perhaps more open source contributions as people stayed home). Stack Overflow noted increased traffic. It is hard to quantify, but certainly the importance of JavaScript in enabling remote work, education, etc., was felt (consider Zoom’s web client or Google Meet – all JS-heavy apps).
Notable Security & Ecosystem Events (2020–2022): - Supply Chain Security got even more attention after incidents like SolarWinds (a targeted attack on build systems) and npm “protestware” in 2022: for instance, the maintainer of node-ipc (a popular dependency in Vue CLI) shipped a version that detected if it was running in Russia or Belarus and then printed a protest message and, in certain versions, overwrote files in protest against the war in Ukraine. Earlier in 2022, the popular colors library had an intentionally breaking update by a maintainer upset with corporate usage of OSS. These incidents highlighted a new vector: maintainers themselves sabotaging their packages, whether as political statement or out of frustration. It caused a scramble in the ecosystem to pin dependencies and consider vetting maintainers. GitHub (which now owned npm) started initiatives for npm package signing and verification, and improved 2FA enforcement for maintainers of popular packages. The OpenJS Foundation and others held discussions on how to fund and support maintainers to reduce burnout-driven events. - The Log4j vulnerability (Dec 2021) in the Java world reminded everyone that even non-JS issues can impact JS indirectly (e.g., Node servers using Java services). - WASM (WebAssembly) by 2022 became a stable part of the web – though primarily used in niches (like Figma using WASM for some editing operations, or games compiled from C++). WebAssembly didn’t replace JS (it’s more complementary, for heavy compute or using libraries from other languages). Node.js can also run WASM modules, and there’s exploration of WASM for serverless because of quick startup. JS-WASM interop continued to improve (with proposals like WASM GC to allow high-level interop). But for most web devs, WASM in 2022 was not part of daily coding, except that some libraries might be transparently using it (e.g., an image processing lib might use WASM under the hood for speed).
- ESLint and others: In 2022, we saw some core maintainers stepping back (e.g., ESLint creator Nicholas C. Zakas handed day-to-day leadership to the project’s technical steering committee). Such maturity means these projects have governance structures in place, so they continue smoothly.
In summary, the 2020–2022 period has been one of fine-tuning and improving the developer experience and performance: - The language is addressing longstanding minor annoyances (nullish coalescing, optional chaining, etc.). - The tools are making development faster and more pleasant (Vite, esbuild, etc.). - New runtime options (Deno, Bun) challenge the status quo and bring fresh ideas. - Frameworks are focusing on delivering better UX to users – by shipping less JS (SSR, islands, resumability) and by using smarter reactivity for snappier UIs. - TypeScript is almost assumed in many contexts – it’s less a question of “JS or TS” and more “how strict do we want our TS”. - JavaScript remains the lingua franca of the web, and also now a major player on the server and in native apps (via Electron/React Native). The economy around JS (jobs, tooling vendors, etc.) is huge, which also means changes are more measured (backcompat and stability are more valued now than the wild west experimentation of earlier days).
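As an example of those “minor annoyance” fixes, nullish coalescing (`??`) and optional chaining (`?.`), both from ES2020, handle defaults and missing properties more precisely than the old `||` idiom (the `config` object here is just an illustration):

```javascript
const config = { retries: 0, timeout: null };

// ?? only falls back on null/undefined, so a deliberate 0 survives:
const retries = config.retries ?? 3;            // 0
// || treats every falsy value as "missing", silently discarding the 0:
const legacy = config.retries || 3;             // 3
// ?. short-circuits to undefined instead of throwing on a missing object:
const host = config.server?.host ?? "localhost"; // "localhost"
```

Before ES2020, the `retries` case required a verbose `typeof` or `!= null` check; this is exactly the kind of ergonomic papercut the yearly-release process has been sanding down.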
Finally, we look to 2023–2025: the present and near-future, where we expect further consolidation of these trends: perhaps widespread adoption of those performance-oriented frameworks, new standards like Temporal, maybe greater integration of JavaScript with emerging fields like AI/ML (e.g., TensorFlow.js), and possibly, a consideration of how JavaScript competes or cooperates with languages like WebAssembly or new paradigms. We’ll also consider some counterfactuals in hindsight to appreciate how far JS has come.
2023–2025: The State of JavaScript Today and Future Outlook (Bun, WebGPU, Consolidation of Meta-Frameworks, and Emerging Paradigms) Link to heading
*(This section covers the most recent developments up to 2025, with an eye on where things are heading. It is written with the understanding that some events are still unfolding.)*
As of 2025, JavaScript stands as a robust, multifaceted ecosystem. The past couple of years have seen incremental improvements to the language and major implementations, as well as the fruition of ideas that were in experimental phases earlier. We also observe a consolidation in some areas – for example, certain “meta-frameworks” now dominate how we build large web applications – while entirely new capabilities (like WebGPU) are expanding what JavaScript can do.
Bun 1.0 and the Competitive Runtime Landscape: By mid-2023, Bun reached a 1.0 release, signaling it’s ready for production use. Bun has made significant progress implementing Node.js APIs (to ease migration) and claims impressive performance for web servers and tooling tasks. For example, Bun’s integrated bundler and test runner offer speed benefits over Node-based counterparts, and its bun install (npm client) is much faster than npm/Yarn[53]. While Node.js still reigns in enterprise usage, Bun has gained traction among developers seeking speed and a modern standard library out-of-the-box. Deno also continued evolving (with Deno 1.28, in late 2022, adding npm package compatibility via npm: specifiers and a Node compatibility layer, bridging the gap to Node’s vast package ecosystem). The runtime competition has effectively broken Node’s monopoly. This competition yields benefits: Node has adopted more web APIs (fetch, Web Crypto, File API, etc.) and improved performance (Node 20 upgraded V8, shipped a stable built-in test runner, and tightened module resolution to match the convenience offered by Deno/Bun). It’s plausible that in the next few years, we’ll see cross-runtime package standards – already, many tools run in Node or Deno or Bun with minimal changes, as long as they stick to standard APIs.
One interesting development is serverless and edge runtime adoption: Cloudflare Workers, Vercel’s Edge Functions (powered by V8 isolates), and Deno Deploy are being targeted by frameworks. By 2025, it’s normal for a full-stack framework to let you choose between Node or Edge runtime for server-side logic. This influences what APIs you use (e.g., fetch and Response Web API instead of Node’s http module, since the former works on edge runtimes). Thus, browser-like standard APIs are increasingly the lingua franca, fulfilling the goal of “isomorphic” code that can run in many environments.
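A sketch of what such runtime-agnostic server code looks like, using only WHATWG-standard globals (URL, Request, Response) so the same handler could run unchanged on Node 18+, Deno, Bun, or an edge runtime. The route table itself is a hypothetical example, not any framework’s API:

```javascript
// A handler written against web standards instead of Node's http
// module: it takes a Request and returns a Response, which is the
// contract edge runtimes (Cloudflare Workers, Deno Deploy, Vercel
// Edge Functions) expect.
function route(request) {
  const { pathname } = new URL(request.url);
  if (pathname === "/health") {
    return new Response("ok", { status: 200 });
  }
  return new Response("not found", { status: 404 });
}
```

On Cloudflare Workers this function would be exported as the fetch handler; on Node it can be adapted with a thin bridge. The point is that nothing in it is Node-specific.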
ECMAScript 2023 and 2024 Features: Continuing the yearly cadence, ES2023 standardized features that had long been in discussion: - Array findLast and findLastIndex: handy complements to find that search from the end of an array. - Array toSorted, toReversed, toSpliced, and with: non-mutating array methods (e.g., toSorted returns a sorted copy rather than sorting in place like sort does), part of a general move to provide pure functions that avoid accidental mutations. - Symbols as WeakMap keys also landed (RegExp match indices, the d flag, had already arrived in ES2022). - ES2024 continued the pattern with Promise.withResolvers, Object.groupBy/Map.groupBy, the RegExp v flag, and resizable ArrayBuffers. - Possibly the most impactful pending addition is Temporal: it reached Stage 3 in 2021 and has remained there while implementations mature. As anticipated, Temporal provides a much-needed modern date/time API[49]. Once in the language, developers can manage time zones, calendar systems, and date arithmetic without the footguns of Date. As it becomes widely available, Temporal should phase out moment.js and other legacy date libraries. One can imagine a transitional period where both exist, but given Temporal addresses many pain points (correct UTC vs local time handling, immutability, etc.), it should become the go-to. - Decorators are on the cusp: after many iterations, TC39 achieved consensus on a decorator proposal (Stage 3 by 2022). If it is finalized, it will allow declarative annotation of classes and class members with meta-programming logic. Decorators have been used heavily in frameworks (Angular’s @Component, TypeORM’s @Column, etc., via TypeScript’s experimental support), so a standard will legitimize and unify those patterns. Expect frameworks to adjust to the new standard decorators (which differ in some ways from TS’s legacy experimental ones) – possibly requiring codemods but resulting in more interoperable code. - Pattern Matching (switch on steroids): There’s an advanced proposal for pattern matching (inspired by Scala/Rust) that has gone through ups and downs.
If it gets in by 2025, it would allow powerful destructuring and matching in a switch-like syntax. This would cater to functional programming styles (akin to a match expression). It’s not final yet as of 2023, but still actively discussed.
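The non-mutating array methods mentioned above replace a long-standing copy-then-mutate idiom. A sketch of both styles follows; the portable fallback is what actually executes here, with the ES2023 equivalents (which need Node 20+ or a modern browser) shown in comments:

```javascript
const scores = [3, 1, 2];

// Pre-ES2023 idiom: copy first so the original is untouched.
const sorted = [...scores].sort((a, b) => a - b);
// ES2023 equivalent: const sorted = scores.toSorted((a, b) => a - b);

// findLast-style search from the end, written portably:
const nums = [1, 4, 3, 6, 5];
let lastEven;
for (let i = nums.length - 1; i >= 0; i--) {
  if (nums[i] % 2 === 0) { lastEven = nums[i]; break; }
}
// ES2023 equivalent: const lastEven = nums.findLast((n) => n % 2 === 0);
```

The built-ins mostly codify patterns developers were already writing by hand, which is typical of the yearly-release era.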
By 2025, the JavaScript language is largely “complete” in terms of major desires – meaning, most features people commonly ask for exist either in the language or as a well-supported proposal. There will always be new proposals, but the era of huge gaps (modules, async/await, etc.) is over. The focus now is on ergonomics (like do expressions to allow if/try as expressions, etc.), performance (like making JS more optimize-able through new primitives), and specific domains (like better support for binary data, possibly implementing things like Record & Tuple immutable data structures for efficient deep immutability). Records and Tuples, which allow deeply immutable and comparable-by-value structures, are in Stage 2 as of 2025. If eventually added, they could address use cases where developers currently use Object.freeze or persistent library structures for reliability.
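Until Record & Tuple land, deep immutability is typically approximated with recursive Object.freeze, as noted above. A minimal illustrative helper (note that, unlike the proposal, this gives only reference equality, not compare-by-value semantics):

```javascript
// Recursively freeze an object graph. Object.freeze alone is shallow,
// so nested objects would otherwise remain mutable.
function deepFreeze(obj) {
  for (const value of Object.values(obj)) {
    if (value !== null && typeof value === "object" && !Object.isFrozen(value)) {
      deepFreeze(value);
    }
  }
  return Object.freeze(obj);
}
```

With Record & Tuple, `#{ a: 1 } === #{ a: 1 }` would be true by value; no amount of freezing gives plain objects that property, which is why the proposal needs new primitives rather than library code.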
Web APIs and Browser Advances: - WebGPU: After a multi-year development, WebGPU 1.0 was released in 2023 as the successor to WebGL[54]. WebGPU provides a modern, low-level graphics and compute API that gives web code access to GPU capabilities with performance close to native. It significantly expands what can be done: advanced 3D graphics, machine learning computations on GPU (GPU-accelerated TensorFlow.js, for instance), and other parallel tasks. By 2025, WebGPU is available in Chrome, Firefox, and Safari (with some likely prefix or flags at first). While still a specialized feature (used by game engines, data visualization libs, ML libs), it underscores that JavaScript (or rather, the Web platform accessible from JS) can now leverage hardware to a degree unimaginable in 2010. Frameworks like Three.js are adapting to optionally use WebGPU for rendering for better performance. We might also see entirely new applications (like Figma’s next-gen graphics engine) take advantage of WebGPU for a smoother experience. - WASM GC and Component Model: WebAssembly has been evolving to better integrate with JS and to support high-level languages. The GC proposal (garbage collection) aims to let WASM directly allocate and manipulate managed objects, making languages like Kotlin or C# compile more efficiently to WASM. It also would allow WASM and JS to share objects more easily. By 2025, it’s possible that more of these capabilities are landing, further cementing WebAssembly as a companion to JS for performance-critical parts. There’s also a WebAssembly Component Model in development, to allow binary modules to interoperate (for example, a Rust WASM module easily calling a Go WASM module). Over time, this could make the web runtime a polyglot VM where JS is the glue. - Off-main-thread everything: To improve perceived performance, browsers and standards are exploring moves to offload work off the main thread. 
For example, background JavaScript via Worklets (AudioWorklet, PaintWorklet, etc., already exist). There’s discussion of a main-thread scheduler API to allow cooperative multitasking (giving JS more control to yield to avoid jank). Frameworks have their own schedulers (React’s concurrency uses scheduler microtasks), but a standardized one could help multiple libraries play together nicely. - HTTP/3 and server push: Protocol improvements like HTTP/3 (based on QUIC) are now widely deployed. This indirectly benefits JS apps as it reduces latency and head-of-line blocking. Also, broadband and device improvements in many markets mean bigger JS bundles are more tolerable than before, but conversely, expansion to new users (the global south on mobile) means performance remains critical – hence the push to minimize JS still matters. We have a divergence of environments: high-end devices that can handle a 5 MB bundle, and low-end devices or emerging markets where every KB and parse cost matters. Frameworks thus often provide options to optimize for both (e.g., a baseline fast experience, enhanced if the device is powerful). - Security and Privacy: The web in the early 2020s is heavily shaped by privacy: e.g., browsers moving to phase out third-party cookies (a transition planned for 2024, though repeatedly delayed), which influences analytics and ads. For JS developers, you see more usage of Content Security Policy (CSP) and related measures. Supply chain integrity may lead to more usage of Subresource Integrity (SRI) for third-party scripts and perhaps digital signatures on npm packages (GitHub has a beta for signing packages). By 2025, it might even be that package managers can enforce signing for critical packages to mitigate tampering.
Meta-Framework Consolidation: The market of meta-frameworks (Next, Nuxt, Remix, SvelteKit, Angular Universal, etc.) is competitive but also collaborative. Notably, Remix was acquired by Shopify in 2022, providing resources for its development. Next.js is now quite mature and is expanding beyond its React roots (for instance, the Next 13 “App Router” is a significant overhaul using React’s latest features, and it’s blurring the lines between static and dynamic). Nuxt 3 (released 2022) does for Vue what Next does for React. SvelteKit 1.0 launched in late 2022, giving Svelte a full-stack story with excellent performance. Angular got into the act with analogues (Angular Universal for SSR, and built-in hydration support in the works). Given these, by 2025 developers have a rich choice of integrated frameworks that handle routing, data fetching, and rendering strategy. The decision often comes down to team preference and specific project needs.
We might see convergence in ideas: e.g., React’s team adopting more of Remix’s conventions (React Router v6 and Next’s new router are reminiscent of Remix’s nested routing and loaders). Similarly, SvelteKit and Nuxt introduced their own spins on islands (partial hydration). It’s likely these frameworks will learn from each other and certain patterns will gradually become standard: for instance, nearly all will support some form of automatic code-splitting and lazy hydration, and likely integrate edge deployment (Cloudflare or Vercel edge networks) as a first-class target. This means as a developer in 2025, you worry less about manually optimizing routing or splitting – the framework handles it or has conventions to do so.
State of JS and Community in 2025: The JavaScript community remains huge and diverse. If anything, roles have become more specialized: e.g., a “frontend developer” in 2015 might have been expected to do everything from design CSS to writing build configs, but by 2025, large companies have specializations like “frontend infrastructure engineer” (working on build tools, design systems), “full-stack product engineer” (using the meta-framework to deliver features end-to-end), etc. On the other end, the barrier to entry for simple web projects is actually lower thanks to tools like Vite (you can start a project with zero config and modern JS features that just work in the browser). The ecosystem has a bit of everything: high complexity in some corners, and much simplicity regained in others (e.g., a Svelte project often feels simpler than an equivalent React project with heavy state management libraries).
AI and JavaScript: An interesting new influence is the rise of AI coding assistants (like GitHub Copilot, based on large language models). By 2025, many JS developers use AI to write boilerplate or get suggestions, which might somewhat mitigate the complexity by offloading rote tasks. There are also AI frameworks in JS – TensorFlow.js and others – allowing ML models to run in-browser or on Node. As ML becomes common, JS might not be the primary language for training models (Python holds that crown), but for deploying models (inference) on web clients, JS can be crucial (for privacy or latency reasons, running a model in the browser is attractive). We already see things like web-based image generators or audio processing using WASM/JS combos. This may push JS to add capabilities or at least push the Web platform (like WebGPU aiding AI by enabling fast parallel compute for model inference).
Alternate Histories (Counterfactuals) Reflection: Looking back from 2025: - The decision in 2008 to abandon ES4 and go with ES5/ES6 harmony was pivotal and, in hindsight, correct. If ES4 (with optional static types, etc.) had shipped, it might have fragmented the web (since not all would implement it). Instead, gradual evolution kept the ecosystem together. The absence of built-in types in JS eventually led to TypeScript filling the gap in a non-breaking way – arguably a more flexible outcome (teams that want types can use them, those who don’t can ignore). Possibly if ES4 had happened, we might have had a statically-typed JS by 2011, but it would likely have caused either a fork (maybe Microsoft sticking to ES3, etc.) or complexity that many devs wouldn’t use (like how C++ added lots of features not all use). - If Google hadn’t created V8 or Chrome, it’s doubtful JS would be as dominant today. V8’s performance push forced everyone’s hand. Without it, perhaps a language like Dart (which Google attempted in 2011 as a potential successor to JS) might have gained traction or Microsoft’s TypeScript might have been positioned as a separate runtime. In a no-V8 world, maybe Silverlight/Flash would have continued longer for rich web content, and Node.js might not exist (as it was built on V8’s performance). So V8’s creation was a linchpin that kept JavaScript at the center of the web revolution. - If Node.js hadn’t succeeded, the “JS as universal language” idea would have been weaker. Perhaps we’d see more polyglot development (Ruby/Python on server + JS on client). Instead, Node allowed JavaScript developers to become full-stack and brought an immense number of web developers into server-side programming. One alternate reality: if not Node, maybe by 2020 we would have gravitated to using something like WebAssembly on the server to unify front/back (e.g., use Rust or C# compiled to WASM on both client and server). 
But Node happened and filled that niche with JavaScript itself. - One can consider: What if browsers had agreed on a single JS engine (like all adopting V8)? That didn’t exactly happen (though through Chrome’s popularity and Edge moving to Chromium/V8 in 2019, effectively V8 runs a huge portion of the web). In our timeline, multiple engines (V8, SpiderMonkey, JSC) keep each other in check, and Node’s use of V8 meant web and server JS benefitted together from V8’s improvements. - What if WebAssembly (2017) threatened JS’s role on the web? That hasn’t happened. WASM is great for heavy compute and enabling other languages, but it does not replace JS for the high-level glue and UI manipulation (not to mention WASM modules have no direct DOM access without JS interop). Instead, JS and WASM are complementary – exactly as envisioned by those who created WASM as a compilation target, not a hand-written language. So the scenario where JS loses primacy on web because of WASM hasn’t materialized by 2025, and likely won’t, because WASM works best with a managing layer (often written in JS). - Another “what-if”: What if TypeScript (or Dart, etc.) had been standardized as the new web language? TypeScript chose to remain a transpiler and not fork JS’s runtime, which was wise. If an alternate syntax or language had been pushed (like Dart’s attempt with a Dart VM in Chrome), it probably would have failed due to backward compatibility concerns. The multi-tiered evolution (opt-in TS for those who want it, otherwise plain JS) meant no hard split. We see convergence instead: TC39 adopts some syntax that TS pioneered (like decorators, now formalizing). - On the business side: What if npm Inc had collapsed? It nearly did financially around 2019. GitHub’s acquisition in 2020[44] ensured the npm registry remained stable. If it had gone under, we’d have seen a chaotic scramble to mirror packages or move to an alternative (maybe Yarn’s planned Yarn 2 registry or something). 
Thankfully, the single source of truth for JS packages survived.
In conclusion, as we stand in 2025, JavaScript is stronger than ever – not by being static, but by continuously adapting. It absorbed ideas from other communities (module systems, async patterns, typing, etc.), responded to external pressures (security, performance), and benefitted from an incredibly large and passionate community of developers who push it in new directions (from tiny npm packages to massive apps, from web art to enterprise software). JavaScript’s story from 1995 to 2025 is one of unification (of a language standard, of an ecosystem through npm, of front-end and back-end development) and innovation born from collaboration (browser vendors cooperating on standards, open source projects jointly shaping best practices).
The language that Brendan Eich wrote in 10 days and once dismissed as a “toy” has become a linchpin of computing. It is the first programming language for many new developers worldwide and shows no sign of losing that position. If anything, the next decade might see JavaScript (and its web runtime) take on roles in IoT (with projects like Espruino, low.js for microcontrollers), in XR/Metaverse (using WebXR APIs for AR/VR), and beyond. The challenges ahead might include keeping the web open and performant under the load of ever more complex apps, and ensuring security as JavaScript drives an ever larger portion of the software supply chain.
JavaScript’s history teaches us that standards and practice have a symbiotic relationship: the best outcomes occur when formal bodies recognize and standardize grassroots innovations (like JSON, modules, async functions), and when practitioners respect the foundations enough not to fork the ecosystem (as seen by the failure of attempts to replace JS outright). This co-evolution will likely continue. The governance structures (TC39, WHATWG, W3C, OpenJS Foundation) are now well-established and generally cooperative; the community has channels to provide input (through proposals, GitHub, etc.). Thus, JavaScript’s future looks bright, with incremental improvements guided by real-world needs and an ecosystem flexible enough to incorporate paradigm shifts.
To encapsulate the journey: in 1995 JavaScript was a 10-day hack embedded in a niche browser; by 2025 it is a centerpiece of software development, running on virtually every computing device in some form. Its formal history (specifications, engines) and informal history (frameworks, patterns, culture) are now deeply intertwined – each influencing the other in a continuous feedback loop. Understanding this history is not just academic; it helps developers appreciate why things are the way they are (e.g., why we have both == and ===, or why module import/export was designed as it is[17]) and how to navigate the future changes with an eye on the past.
JavaScript’s evolution will likely be remembered as a case study in balancing backward compatibility with innovation – the language “that built the web” had to keep working for old sites while powering new ones. And it did so through a mix of foresight and adaptation: formal standards often anticipated needs (like ES6 modules preparing for large-scale codebases), but also wisely adjusted course when the community provided new insight (like embracing Promises from the web API world into the language spec[17]).
As of 2025, one might finally say JavaScript has reached a level of maturity and completeness where the phrase “post-JavaScript world” is nowhere in sight. Instead, JavaScript is continually reinventing itself, proving the mantra that “JavaScript always wins on the web” – not by fiat, but by being the living language that the web platform and its developers collectively nurture.
Annotated Timeline of Major JavaScript Milestones (1995–2025) Link to heading
- May 1995: Brendan Eich creates Mocha (JavaScript’s original codename) in 10 days at Netscape[2].
- Dec 1995: Mocha (LiveScript) renamed JavaScript in Netscape 2.0; Netscape and Sun announce JavaScript partnership[6][4]. Microsoft implements JScript in IE 3 (1996), igniting browser wars.
- June 1997: ECMA-262 1st Edition (ECMAScript 1) is standardized[12], based on JavaScript 1.1.
- June 1998: ES2 released (editorial fixes). Oct 1998: DOM Level 1 spec published (W3C), offering first standard DOM for JS.
- Dec 1999: ES3 released, adding regex, try/catch, stricter definitions[8]. Around the same time, IE5 introduces XMLHttpRequest (enabling AJAX), building on innerHTML (introduced in IE4, a de facto standard by 2005).
- 2001: Microsoft IE6 launches (dominant browser for years, but with stagnant JS support beyond ES3).
- 2002: Douglas Crockford introduces JSON (JSON.org)[14], which rapidly becomes the favored data interchange format in AJAX apps.
- 2004: Mozilla Firefox 1.0 releases (open-source Netscape successor, revives competition). Gmail’s launch popularizes AJAX techniques (2004), showing JS can build rich apps.
- July 2008: TC39 abandons ES4 after years of work; agrees on “Harmony” for future (leading to ES5 and ES6)[17].
- Sept 2008: Google releases Chrome with V8 engine – 10× faster JS execution than previous engines[22]. Browser performance wars begin.
- 2008–2009: All major browsers deploy JIT compilers (Firefox TraceMonkey 2008[20], Safari SquirrelFish Extreme 2008, IE9’s Chakra JIT announced 2009). JS speed drastically improves.
- Dec 2009: ECMAScript 5 finalized[21] – adds strict mode, native JSON support, Function.prototype.bind, Array.prototype.forEach, etc., catching up to patterns already common in libraries. First major spec update in 10 years.
- 2009: Ryan Dahl releases Node.js, bringing JS to the server side using V8. The npm package manager follows (2010), spawning the npm ecosystem and CommonJS modules.
- 2010: jQuery becomes the most used JS library on the web (released 2006, it had ~50%+ usage by 2011), simplifying DOM and AJAX across browsers.
- 2010–2011: Emergence of MVC frameworks – Backbone.js (2010), AngularJS (2010), Ember.js (2011) – to structure single-page applications.
- 2012: Microsoft releases TypeScript 0.8 – statically typed superset of JS. Slow start, but seeds future widespread use.
- 2013: React library released by Facebook (open-source). Introduces component-based UI and virtual DOM – will become dominant UI paradigm[28].
- 2014: TC39 adopts a new proposal process (Stages 0–4)[27] and a cadence aiming for yearly releases.
- 2015: ECMAScript 2015 (6th Edition) approved[30] – massive update with modules, classes, arrows, let/const, Promises, generators, Map/Set, etc. Transforms the language.
- 2015: Babel (formerly 6to5) popularizes transpiling, enabling developers to use ES2015+ before browsers support it. By this time, build toolchains with Webpack and Babel are common.
- 2016: ES2016 (ES7) released – minor updates (the ** exponentiation operator, Array.prototype.includes). Yarn package manager launched (Facebook) to improve the npm experience.
- Mar 2016: Left-pad incident – deletion of a tiny npm package breaks thousands of builds[55][37]. Triggers npm policy changes and cautionary tales about deep dependency trees.
- 2016–2017: Angular 2 (completely revamped, in TS) released; Vue.js 2 released – both gaining large communities. React consolidates as well with growing ecosystem (Redux, etc.).
- 2017: ES2017 (ES8) – introduces async/await[28], making asynchronous code far easier, plus Object.values/entries, string padding, etc. Node.js 8 and browsers quickly adopt async/await.
- 2018: ES2018 – adds rest/spread properties, async iteration, Promise.prototype.finally, etc. Webpack 4 and Babel 7 released, reflecting the maturity of build tools.
- Nov 2018: Event-stream npm attack discovered[39][40] – highlights supply-chain security risks in npm ecosystem. npm responds by adding auditing tools[56].
- 2019: ES2019 – adds optional catch binding, Array.prototype.flat, Object.fromEntries, etc.
- 2019: TypeScript usage soars – over 50% of “State of JS” respondents now use it, with ~90% satisfaction[35]. Static typing becomes mainstream in JS development.
- Apr 2020: npm Inc acquired by GitHub/Microsoft[44] – ensuring stability of the npm registry and integrating with GitHub infrastructure.
- 2020: ES2020 – notable for ?? (nullish coalescing) and ?. (optional chaining), both hugely popular additions, plus BigInt, Promise.allSettled, dynamic import(), and module namespace exports.
- 2020: Deno 1.0 released – a secure runtime with TypeScript support out of the box (a Node alternative by Node’s creator). Snowpack and Vite appear, leveraging native ESM in browsers for faster development.
- 2020–2021: Lightning-fast tooling – Evan Wallace’s esbuild (Go-based bundler) and swc (Rust-based TS/JS compiler) demonstrate 10-100× speed improvements[50]. Integrated into many tools (Vite, Next.js).
- 2021: ES2021 – adds logical assignment operators (&&=, etc.), Promise.any, and String.prototype.replaceAll.
- 2021: React 17 (no new JSX API; focus on gradual updates) and groundwork for concurrent features. Meanwhile, new frameworks – Solid.js, SvelteKit (beta), and Remix – launch, exploring fine-grained reactivity and new routing paradigms.
- Jan 2022: JavaScript protestware – faker.js/colors.js maintainer intentionally breaks package. Mar 2022, node-ipc module incident[56]. Leads to increased emphasis on 2FA for npm publishers and package signing initiatives.
- 2022: ES2022 – adds class fields and private methods (finally standardizing those patterns), Array.prototype.at, Object.hasOwn, and top-level await in modules.
- Oct 2022: Next.js 13 released – introducing React Server Components (experimental) and Turbopack, broadening React’s meta-framework capabilities for edge and dynamic rendering.
- Nov 2022: Vue 3 becomes default (having introduced Composition API, proxies for reactivity). Angular 14 now fully Ivy and simpler module-less standalone components.
- 2022: Bun runtime (Zig-based) gains traction in beta, showing extremely fast performance in dev servers and package installs.
- 2023: ES2023 – includes Array.prototype.findLast/findLastIndex and the change-array-by-copy methods (toSorted, toReversed, etc.), with the much-awaited Temporal still in the proposal pipeline (possibly for ES2024)[49].
- 2023: WebGPU 1.0 API ships in Chrome[54] – giving JS direct access to modern GPU features (compute shaders, high-performance graphics). Marks a new era for web games and GPU-accelerated apps.
- 2023: Bun 1.0 ships – providing a third major runtime in addition to Node and Deno, with built-in bundler and test runner.
- 2024 (anticipated): Temporal API likely standardized (bringing modern date/time to JS), Decorators reaching the finish line, and continued annual updates focusing on developer ergonomics. Node.js 20+ might include features like built-in permission controls and more web API alignment to keep pace with Deno/Bun.
- 2025: JavaScript celebrates 30 years 🎉. It remains the dominant language of the web, with a rich multi-threaded, multi-platform runtime (thanks to Web Workers, WASM, and diverse engines). Discussions in TC39 and web standards continue on improvements like pattern matching, optimizing built-in classes (perhaps a future with types or contracts?), and maintaining security. The community by now sees JavaScript not as a single monolith, but as an ecosystem including TypeScript, WASM modules, and many build tools – yet all centered around the core that is ECMAScript.
(Each entry above is annotated with a year and key event, tying formal releases with informal ecosystem developments. Sources for specific claims are cited in context in the main text.)
Comparative Case Study: AngularJS (1.x) vs. React – The Shift in Framework Paradigms Link to heading
To understand the evolution of front-end frameworks, we examine the shift from AngularJS (the dominant SPA framework circa 2012) to React (which emerged mid-2010s and became the new standard). This transition encapsulates technical trade-offs and social dynamics in the JS community.
Background: AngularJS’s Approach (2010–2015) Link to heading
AngularJS, developed by Google and released in 2010, was a comprehensive “batteries-included” framework following the Model-View-Controller (MVC) or Model-View-ViewModel (MVVM) pattern. Key features and motivations of AngularJS:
- Two-Way Data Binding: AngularJS automatically synchronized the model (JavaScript objects) and the view (DOM). When an input field changed, the underlying model property would update, and vice versa. This was achieved via “dirty checking”: Angular would periodically (or on certain events) scan all model-watch expressions (in what’s called a digest cycle) and update bound values[28]. The benefit was developer convenience – you could write logic declaratively in the template (using {{expression}} or ng-model for form inputs) and not manually handle DOM updates on model changes. The problem it solved: jQuery apps required a lot of manual DOM querying and updating; Angular automated that.
- Templates with Directives: Angular’s HTML templates were an extension of HTML. It introduced directives – custom HTML attributes/elements like ng-repeat, ng-if, and ng-model – that the framework would process to attach behavior to the DOM. This allowed HTML to become more expressive (iterating lists, conditionally showing elements, etc., without writing JS code for those). It solved the issue of mixing logic into the HTML, letting designers and developers work in what looked like HTML but was augmented with dynamic capabilities.
- Dependency Injection (DI): Angular had a built-in DI system. You could declare dependencies (such as services and factories) for controllers or other services, and Angular would inject them. This encouraged modularity and ease of testing (you could inject mocks). It addressed the problem of managing module dependencies and global state by providing a structured way to do both.
- Structured Modules (services, controllers, etc.): AngularJS encouraged a certain project structure: you’d have controllers (for view logic, tied to scopes), services (singleton business logic providers), and it managed their lifecycle. This gave developers a formulaic way to build large apps, addressing the “spaghetti code” problem of jQuery by enforcing separation of concerns (controller vs service vs view).
- Opinionated Full-Stack of the Frontend: AngularJS was quite opinionated – it provided its own module system (distinct from AMD or CommonJS, though it could integrate), its own way of handling HTTP requests (the $http service), form validation, routing (ngRoute, or later ui-router), etc. For a developer, this meant that if you learned Angular, you had an all-in-one toolkit. It tackled the lack of structure by providing one out of the box.
- Community & Learning Curve: Angular had strong backing from Google (the Angular team) and became popular especially for enterprise apps. However, it had a steep learning curve – concepts like the digest cycle, scope hierarchy, and link vs. compile functions in directives were non-trivial. Debugging binding issues (“why isn’t this updating?”) could be tricky without understanding internals like the digest loop. Over time, as apps grew, AngularJS could exhibit performance issues – e.g., with too many bindings or large tables using ng-repeat, since each digest cycle had to check every watcher, leading to potential lag (every update could trigger many comparisons, though Angular tried to short-circuit when no changes were detected). Developers had to employ techniques like one-time bindings or manual digest triggers to keep performance in check.
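To make the digest-cycle idea concrete, here is a toy dirty-checking loop in plain JavaScript – an illustration of the concept only, not AngularJS’s actual implementation (the `Scope` class, `$watch`, and `$digest` below merely mimic the shape of Angular’s API):

```javascript
// Toy dirty-checking "scope": watchers are re-evaluated in a loop until
// no watched expression changes, mimicking AngularJS's digest cycle.
class Scope {
  constructor() {
    this.watchers = [];
  }
  // watchFn reads a value from the model; listenerFn reacts to changes
  // (in Angular this would update the DOM).
  $watch(watchFn, listenerFn) {
    this.watchers.push({ watchFn, listenerFn, last: undefined });
  }
  $digest() {
    let dirty;
    let iterations = 0;
    do {
      dirty = false;
      for (const w of this.watchers) {
        const value = w.watchFn();
        if (value !== w.last) {       // "dirty": value changed since last check
          w.listenerFn(value, w.last);
          w.last = value;
          dirty = true;               // a listener may have changed other values
        }
      }
      // Angular capped the loop at 10 iterations to avoid infinite loops.
      if (++iterations > 10) throw new Error('10 digest iterations reached');
    } while (dirty);
  }
}

// Usage: bind a model property to a (fake) view update.
const model = { name: 'Ada' };
const scope = new Scope();
const view = [];
scope.$watch(() => model.name, v => view.push(`render: ${v}`));

scope.$digest();       // initial pass renders once
model.name = 'Grace';  // mutate the model...
scope.$digest();       // ...nothing updates until a digest runs
console.log(view);     // ['render: Ada', 'render: Grace']
```

Note how mutating the model does nothing by itself – a digest must run, which is exactly why DOM changes made outside Angular required a manual $scope.$apply().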
Social forces around AngularJS: It gained a huge community between 2012 and 2014. Google pushed it via conferences, and it solved immediate needs (structure for SPAs). Yet discontent was brewing by 2014:
- Very complex apps could become difficult to reason about when two-way binding caused unexpected cascades. The term “AngularJS magic” – sometimes things just updated because Angular said so – was both a positive (less code) and a negative (less explicit; harder to debug).
- Testing was touted as a strength (with DI and all), but in practice, end-to-end testing needed something like Protractor (an Angular-specific Selenium wrapper), and unit-testing directives with scope manipulation wasn’t always easy.
- AngularJS also did not play especially well with other libraries – it essentially wanted to own the whole page. Integrating jQuery was allowed (Angular would even use jQuery, if present, for some DOM operations), but mixing Angular with other approaches was uncommon. You were all-in on Angular.
The ecosystem around AngularJS was significant – lots of plugins (directives) and extensions were built, showing that the community was filling gaps or specialized needs by writing custom directives or services.
React’s Emergence and Different Philosophy Link to heading
When React came out (2013, but picking up steam 2014-2015 after open source release), it introduced a fundamentally different approach:
- One-Way Data Flow: React explicitly rejected two-way binding. In React, data flows down from parent to child as props, and events flow up via callbacks. This unidirectional flow makes the app’s state changes more predictable (you know exactly which components will update when a state changes – specifically, the ones directly or indirectly receiving that state as props). It solved the Angular problem of “who changed what?” because in React, an update happens because a component’s state or its parent’s props changed, not because some globally observed property was mutated elsewhere. It enforced a sort of discipline: you couldn’t accidentally two-way bind and create loops.
- Virtual DOM & Declarative Rendering: With React, you re-render the entire component (virtually) whenever state changes, and React’s diff algorithm updates the real DOM minimally. This was a novel solution to performance and complexity: rather than the developer writing imperative DOM update code, or Angular tracking each binding, React said “just re-render your view as if from scratch; I (the library) will handle making it efficient”[28]. This freed developers from worrying about exactly how to update the DOM – you just declare what the UI should look like for any given state. It solved both a developer-mindshare problem (no need to think in terms of DOM operations, just UI = f(state)) and often a performance problem (the diff is fairly efficient, and because you’re not binding each little thing, the framework can batch and schedule updates).
- No built-in structure beyond components: React initially was “just the V in MVC” – it didn’t prescribe an entire app structure with models, controllers, and services. This was liberating for some and worrying for others (“how do I structure my app then?”). The community stepped in with patterns like Flux/Redux for state management, but these were optional. React basically gave you a view library and left many decisions open, in contrast to AngularJS’s all-in-one approach. The advantage was flexibility – you could introduce React into part of a page without rewriting everything (hard with Angular, which liked to own a whole page via ng-app). Over time, React did spawn an ecosystem that filled in the missing pieces (React Router for routing, various data-fetching solutions, etc.), but these could evolve separately, which often meant faster iteration (e.g., multiple competing state management solutions emerged, eventually converging on a few patterns).
- Simplicity of Concepts: React’s core concept – the component – was simpler in a sense than Angular’s complex interplay of scope, controllers, and directives. A React component was just a JS function (or class) that returns UI (JSX) and uses state/props. There was no separate templating syntax to learn (JSX is essentially JS with XML, using the full power of JS for expressions and control flow), whereas Angular’s templates formed their own mini-language of directives and binding expressions. JSX initially put people off (“mixing HTML in JS?”), but it proved powerful, and many eventually adopted it wholeheartedly after seeing the benefits of leveraging full JS in templates (e.g., using array methods to map lists to UI, rather than a custom ng-repeat syntax).
- Performance and Debugging: React, by design, avoided many classes of issues Angular had:
- With no two-way binding, the source of truth was clear (usually a single owner of each piece of state). This meant debugging is often tracing how state flows rather than hunting for which of many watchers might have mutated something.
- The virtual DOM diff, while not free, was O(n) in the number of DOM nodes, and in practice React could handle quite large UIs efficiently, whereas Angular’s dirty checking was O(n) in the number of bindings per digest, and a single model change could trigger multiple digest cycles (capped at 10 iterations, in case model changes triggered further changes, to avoid infinite loops). So React offered more stable performance characteristics. It also later allowed optimization via shouldComponentUpdate (class components) or memoization, but even out of the box it was often good enough.
- React also made server-side rendering straightforward by rendering the virtual DOM to an HTML string (AngularJS had server-side solutions, but the stateless nature of React components made it easier to drop into Node and render).
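The “UI = f(state)” idea behind the virtual DOM can be sketched without React at all: render the whole tree as cheap JS objects, then diff old vs. new to compute minimal patches. This is a toy illustration (the `h`, `render`, and `diff` helpers are invented for the example), nothing like React’s real reconciler:

```javascript
// A virtual node is just a plain object; "rendering" is a pure function of state.
const h = (tag, text) => ({ tag, text });

function render(state) {
  // The whole UI, declared from scratch on every call.
  return [h('h1', `Hello, ${state.user}`), h('p', `${state.unread} unread messages`)];
}

// Naive diff: compare old and new virtual trees, emit patch operations.
// A real reconciler would recurse, key children, handle attributes, etc.
function diff(oldTree, newTree) {
  const patches = [];
  for (let i = 0; i < newTree.length; i++) {
    if (!oldTree[i]) {
      patches.push({ type: 'CREATE', index: i, node: newTree[i] });
    } else if (oldTree[i].text !== newTree[i].text) {
      patches.push({ type: 'UPDATE_TEXT', index: i, text: newTree[i].text });
    }
  }
  return patches;
}

// Usage: only the changed node produces a patch, even though we
// "re-rendered" everything.
const before = render({ user: 'Ada', unread: 2 });
const after = render({ user: 'Ada', unread: 3 });
console.log(diff(before, after));
// [ { type: 'UPDATE_TEXT', index: 1, text: '3 unread messages' } ]
```

The developer only wrote the pure `render` function; all knowledge of what actually changed lives in the diff, which is the division of labor React introduced.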
The shift from AngularJS to React was also driven by social factors:
- Facebook vs. Google influence: Facebook promoted React heavily (through conferences, releasing Flux, etc.) and used it in high-profile sites (Facebook, Instagram). Google, meanwhile, made the controversial decision to completely rewrite Angular (the Angular 2 announcement in late 2014), which caused turmoil: the AngularJS community realized their code would not smoothly upgrade to Angular 2 – it was effectively a new framework (TypeScript-based, with a different architecture). This caused some to reconsider their commitment to Angular. In that period of uncertainty (2015–2016), React looked like a safer bet because its upgrades were incremental and backward compatible (e.g., most React 0.14 code still worked in React 15/16 with minor deprecations). Companies choosing a framework had to ask: “Do we stick with Angular and face a likely painful migration to Angular 2+, or switch to React, which seems to be the future direction of the community?” Many chose React around 2015–2016 as a result. Google eventually released Angular 2 in 2016, and many AngularJS apps did migrate (or new projects started on Angular 2+), but by then React had captured the mindshare of the broader front-end community, especially outside the enterprise.
- Learning Curve and Talent Pool: React was arguably easier to get started with: you could use as little or as much as needed, and it was “just JavaScript” plus maybe learning JSX. AngularJS demanded learning its API, and Angular 2 demanded learning TypeScript, decorators, and more. Companies thus found it easier to hire or train for React – a new developer with JS knowledge could pick up React quickly, whereas the Angular frameworks required more specific training. The State of JS surveys reflected this: React usually had higher satisfaction and interest ratings than Angular after 2016, and Angular’s usage in those surveys declined or stagnated[28].
- Ecosystem Hype and “Lesser Magic”: The community narrative around 2015-2016 was that React had “less magic” – it was more explicit (no hidden digest cycle) and aligned with functional programming trends (pure functions, immutability – Redux was essentially making events processing pure). Thought leaders in JS (bloggers, conference speakers) often championed React/Flux as more predictable and maintainable at scale[28]. There was also a cultural aspect: Angular came from Google with a top-down vision, whereas React’s ecosystem felt more grass-roots and unbundled (you could use React with whatever libs you want – freedom vs Angular’s total framework). Many JS developers preferred that freedom.
Technical Trade-offs in the Transition Link to heading
Problem AngularJS solved: It made building SPAs feasible by abstracting away manual DOM updates and providing structure in a time of chaos. It was a huge productivity leap over jQuery for many use cases (forms, CRUD interfaces, etc.). It also integrated concerns (templating, routing, state) in one framework, which for many was convenient (one stop shop).
Problems it introduced / trade-offs:
- Performance could degrade unpredictably as an app grew (e.g., adding one binding too many could blow up digest time).
- Debugging and mental model: it required understanding the framework’s internals (scope prototypal inheritance, digest cycle timing, etc.) beyond plain JS knowledge.
- It was opinionated – if Angular’s way didn’t suit you, you had to fight the framework (e.g., heavy DOM manipulation outside Angular would confuse it unless you called $scope.$apply() to trigger a digest).
- For very dynamic applications (e.g., games or complex visualizations), Angular was a poor fit due to its overhead and binding paradigm.
How React’s paradigm addressed those:
- React basically said: we’ll handle performance at the framework level (virtual DOM diffing), so you don’t need to micro-optimize by deciding which binding goes where. This generally gave better consistency. In extreme cases, Angular devs had to manually partition scopes or disable watchers to regain speed; React devs occasionally add a React.memo but mostly rely on React itself.
- The mental model in React is closer to how you might describe UI in pseudocode: “given state X, output Y”. It encourages thinking of UI as a function of state, which is conceptually simpler (no need to reason about when watchers fire) and maps well to how we reason about snapshots of UI.
- React’s one-way flow made large-team collaboration easier: one team can build a component and guarantee it only affects its child components. In Angular, if two components relied on a common service or overlapping scope, changes could leak in ways that required careful coordination.
Community and Best Practices Shifts:
- State management: Angular used services or $rootScope for global state, which could become messy. React’s ecosystem moved towards explicit state containers (like the Redux store) which, while verbose, made dependencies and updates explicit. Redux popularized time-travel debugging – since state changes flowed through pure functions (reducers), one could log every action and state, and even replay them. This was a stark improvement in debugging capabilities, embraced by developers of complex apps.
- Components vs. Templates: AngularJS had directives, which could be thought of as components, but writing them was more complex (you often had to manually combine a template with a linking function controlling behavior). React unified these: a component is just JS + JSX in one place. The co-location of markup and logic (JSX) went from controversial to widely accepted because it improved maintainability (no separate template file to keep in sync with code – they live together, so you can see the logic that generates the UI in one spot).
- Reusability: React’s design made truly reusable components easier. Angular directives were reusable, but their API design (via isolate scopes and attributes) was less straightforward than passing props to a React component. React also embraced composition over inheritance for components, which aligned with modern programming preferences.
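The reducer pattern described above can be shown in miniature: state transitions are pure functions of (state, action), so logging the actions is enough to replay – “time travel” to – any past state. This is a plain-JS sketch of the idea, not the actual Redux API; `counter` and `replay` are invented for the example:

```javascript
// A reducer: a pure function (state, action) -> next state. No mutation.
function counter(state = 0, action) {
  switch (action.type) {
    case 'INCREMENT': return state + 1;
    case 'DECREMENT': return state - 1;
    default: return state;
  }
}

// Because reducers are pure, replaying a logged action sequence
// deterministically reconstructs the state at any point in time.
function replay(reducer, actions) {
  return actions.reduce((state, action) => reducer(state, action), undefined);
}

// The action log is the complete history of the app's state changes.
const log = [
  { type: 'INCREMENT' },
  { type: 'INCREMENT' },
  { type: 'DECREMENT' },
  { type: 'INCREMENT' },
];

console.log(replay(counter, log));              // 2 (current state)
console.log(replay(counter, log.slice(0, 3)));  // 1 ("time travel" to after step 3)
```

Nothing here depends on React or Redux; the debugging power comes purely from purity and the explicit action log.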
The Shift Outcome: By 2016-2017, many new projects favored React, especially in startups and the open-source community. Angular (the new Angular, a.k.a Angular 2+) still succeeded in enterprises that valued long-term support and a more static, full-featured framework (and indeed Angular v2+ is very different from AngularJS, addressing many of AngularJS’s flaws by using unidirectional data flow within a component tree, etc., but that’s another story). AngularJS itself entered a legacy path, with Google eventually setting EOL for AngularJS in December 2021.
So the AngularJS → React shift was not an overnight flip; over about 3–4 years, React clearly overtook Angular in popularity and community investment. This illustrates two points:
- When the underlying platform improved (browsers and JS engines got faster, making the virtual DOM feasible) and app complexity grew, the approach that prioritized maintainability (React’s explicitness) and performance via a new technique (reconciliation) won out over an approach designed for an earlier era. AngularJS was built when the DOM and JS engines were slower, hence its emphasis on minimal DOM touching and data binding – yet ironically, by the mid-2010s, JavaScript was fast enough that wholesale re-rendering was no longer unreasonable.
- The social aspect: the open-source leadership and community engagement around React (including React Native, which further boosted React’s relevance by enabling mobile app development with the same paradigm) created a bandwagon effect. The Angular team’s decision to break compatibility contributed to React’s opening.
In summary, the shift from AngularJS to React was driven by the need for:
1. Better scalability in large applications (explicit data flow, easier debugging).
2. Better performance, via the virtual DOM and by avoiding the compounding cost of two-way binding in complex UIs.
3. A simpler mental model – React’s component lifecycle and data flow, while requiring some understanding, ended up easier to reason about than Angular’s digest cycles and scope inheritance for many developers.
4. Adaptability – React could be adopted incrementally (e.g., starting with one widget on a page), whereas Angular was typically all-or-nothing. That incremental adoptability helped React spread.
It’s worth noting that both frameworks influenced each other’s successors: Angular’s new versions (Angular 2+) adopted a unidirectional data flow (mostly) and component-based architecture, clearly learning from React’s success, while the React world later incorporated some Angular-esque ideas like dependency injection patterns (via Context API or hooks) for providing dependencies.
The transition also taught the community general lessons: the value of immutability, one-way data flow, and declarative UI became widely accepted. Those lessons carry into current frameworks like Vue 3 (which uses a virtual DOM and one-way props, with event emitters for child-to-parent communication, similar to React) and Svelte (which, although it compiles the framework away, still follows a one-way data flow in its design).
Thus, the AngularJS vs React case study exemplifies how a dominant paradigm can shift when a new approach offers superior manageability and aligns with developers’ evolving needs, and how the community’s willingness to embrace change (despite rewrite costs) can quickly swing the ecosystem’s direction.
(Sources: see discussion above – e.g., one-way vs two-way data flow increasing predictability[28]. Also, memory: Angular’s two-way binding issues and React’s solutions are well documented in various blog posts and books, and indirectly evidenced by the surge in React’s popularity in surveys and decline of AngularJS, though direct citations from the text above are provided where applicable.)
Standards Deep-Dive: The Journey of Async/Await in ECMAScript Link to heading
One of the most consequential recent additions to JavaScript is the async/await syntax, introduced in ES2017. This feature dramatically simplified asynchronous programming. But its road into the standard was thoughtful and iterative. Let’s deep-dive into how async/await came to be:
Problem Statement (the “Pain Point”): Link to heading
JavaScript, from the beginning, has used an event-loop concurrency model – operations like network requests, file I/O, timers, etc., are asynchronous and typically handled via callbacks (functions invoked upon completion). As JS usage grew (especially on the server with Node.js and in complex front-end apps), callback-based code became hard to manage, leading to a phenomenon famously termed “callback hell” or the “pyramid of doom” (nested callbacks indented to the right). For example:
fs.readFile('input.txt', 'utf8', (err, data) => {
if (err) { /* handle error */ return; }
processData(data, result => {
saveResult(result, err => {
if (err) { /* handle error */ return; }
console.log('Done');
});
});
});
This style is error-prone (every callback must handle errors, often resulting in deep nesting or duplicated code) and difficult to reason about (the inversion of control makes the flow non-linear).
The community addressed this first with Promise libraries. Promises (promoted via libraries like Q, then standardized in ES2015) turned the above into:
readFilePromise('input.txt', 'utf8')
.then(data => processDataPromise(data))
.then(result => saveResultPromise(result))
.then(() => console.log('Done'))
.catch(err => console.error(err));
Promises linearized the flow and allowed chaining and a single .catch for errors[17]. However, promise chains, especially with complex branching or needing multiple awaits in one function, could still get convoluted. Moreover, reading and writing promise code still wasn’t as straightforward as synchronous code; you had to break logic into .then callbacks.
Enter async/await: The idea (inspired by languages like C# 5.0, which shipped async/await in 2012, and Python 3.5 in 2015) was to allow writing asynchronous code in a syntactic style that looks synchronous, greatly improving clarity.
In our example, using async/await, one could write:
async function processFile() {
try {
let data = await fs.promises.readFile('input.txt', 'utf8');
let result = await processData(data);
await saveResult(result);
console.log('Done');
} catch (err) {
console.error(err);
}
}
This is flat, linear, and using try/catch for errors just like sync code. It improved the developer experience by making asynchronous logic easier to write, read, and debug.
Proposal History: Link to heading
- Early Discussions: The concept of async functions in JS was floated as early as 2013. Notably, there was already an experimental feature in Firefox called “generators + promises” – the idea that you could yield on a promise, which libraries like co (by TJ Holowaychuk) leveraged to simulate async/await. Indeed, before actual async/await, a common pattern in Node was:
co(function* () {
  let data = yield readFilePromise('input.txt', 'utf8');
  let result = yield processDataPromise(data);
  yield saveResultPromise(result);
  console.log('Done');
}).catch(err => console.error(err));
The co library would run the generator, and every time a promise was yielded, it would wait for it (under the hood, hooking into the generator’s next()). This proved the model was possible on top of ES2015 (which had generators and promises). Many in the community used this as a stop-gap (especially in Koa, a Node web framework, which encouraged request handlers to be written as generator functions yielding promises).
- TC39 Champion and Drafting: The formal proposal for async functions (async/await) was championed by TC39 members from Microsoft (notably Brian Terlson) and others. It was introduced to TC39 in late 2013 or early 2014[57]. By April 2014, there was discussion about the await keyword and how it should work[57]. Key decisions included:
  - async functions would always return a promise (so await can only be used inside async functions, maintaining compatibility and never blocking the event loop).
  - await would pause the async function’s execution until the awaited Promise settles. If fulfilled, it returns the value; if rejected, it throws the error (so it works seamlessly with try/catch).
  - Syntactic details: choosing async and await as keywords (earlier brainstorming may have considered other syntax, but these were straightforward and mirrored C#’s proven design).
  - Ensuring that await only works within async functions (top-level module await was left for later (ES2022) due to the complexity of module loading order[45]).
  - Avoiding pitfalls: one important aspect was making await pause only the current function, not the whole program. JS is single-threaded, but other tasks can run while an async function is awaiting. Under the hood, async/await was specified in terms of generators and promise handling – essentially, an async function is like a generator that yields promises, with the runtime driving it automatically. The spec needed to define that precisely.
  - Promises had to come first: notably, ES2015 introduced Promises into the standard[17] so that async/await (in ES2017) could be built on top of them. There’s a note in the timeline that “Promises added to ES6 to avoid them being subsumed into HTML spec”[17] – indeed, browsers were starting to use promises (e.g., the Fetch API returns one). TC39 recognized that if it didn’t standardize promises in ECMAScript, the WHATWG (HTML spec) would bake them into Web APIs and JS would have no control over them. So ES2015 promises set the stage, and by the time async/await arrived, promises were well established and optimized in engines.
- Stage Progression: The async/await proposal moved relatively quickly through the TC39 stages: Stage 2 by 2015 and Stage 3 by mid-2016 (meaning the spec text was essentially ready and browsers could start implementing)[58]. It was officially accepted into ES2017. In fact, many engines implemented it before ES2017 was finalized (Babel supported it via transform by 2016; Node’s V8 had it behind a flag in Node 7 and fully in Node 8, mid-2017; and modern browsers like Chrome and Edge shipped it in early 2017).
Community Enthusiasm: Even before standardization, async/await was highly anticipated: developers had seen how much it simplified code, whether from C# or from generator-based workarounds. The design was also not very controversial because it is essentially syntactic sugar over promises – it introduced no new concurrency semantics, just a nicer way to write them. That meant relatively low risk of breaking things or introducing hard-to-understand behavior (one exception: people had to learn that await only pauses the async function, not the whole thread – which most grasped intuitively or after a quick explanation).
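That distinction can be demonstrated directly. In this small sketch (all names and delays are invented for illustration), the code after the async call keeps running while the function is suspended at await:

```javascript
// `await` suspends only the async function it appears in; the event
// loop keeps running other work in the meantime.
const order = [];

async function slow() {
  order.push('slow:start');
  await new Promise(r => setTimeout(r, 10)); // suspends slow(), not the program
  order.push('slow:end');
}

slow();
order.push('main:after-call');   // runs while slow() is suspended at `await`

setTimeout(() => {
  console.log(order); // ['slow:start', 'main:after-call', 'slow:end']
}, 50);
```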
Adoption and Impact: Link to heading
Browser and Node support: As soon as ES2017 was finalized (June 2017), environment support became widespread within months:
- Node v8 (LTS, released October 2017) supported async/await fully, meaning Node users could drop callbacks for most code if they wanted.
- Evergreen browsers (Chrome, Firefox, Edge, Safari) had all implemented it by late 2017. Babel’s async-to-generator transform covered older environments, so even code that had to support IE11 could be written with async/await and compiled down.
Developer usage: Async/await quickly became one of the most-used new features and is now idiomatic. Many Node libraries updated their APIs to return promises so they could be awaited (Node’s own fs module gained promise-returning versions via fs.promises, introduced in Node 10). On the front end, frameworks adopted async functions in place of promise chains for data fetching and similar tasks. It virtually eliminated the direct need for .then in many cases, and error handling became clearer with try/catch than with a .catch far removed from the relevant logic.
One subtlety: in the browser, you could not use await at the top level of a script (outside any function) until ES2022’s top-level await in modules[45]. Before that, awaiting outside a function meant wrapping the code in an async IIFE or falling back to .then. (Node’s REPL and “await-enabled” dev consoles allowed it earlier as a convenience.) ES2022 solved this for modules, but a classic <script> still cannot use top-level await – only modules can. This is a minor detail but relevant in some contexts, such as dynamic module imports.
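The pre-ES2022 workaround looked like this (a minimal sketch; the awaited value is a stand-in for real async work):

```javascript
// Before top-level await, top-level code wrapped async work in an
// async IIFE (Immediately Invoked Function Expression):
(async () => {
  const res = await Promise.resolve('ready'); // stand-in for real async work
  console.log(res);
})().catch(console.error);

// In an ES2022 module (.mjs, or "type": "module"), the wrapper is no
// longer needed:
//   const res = await Promise.resolve('ready');
```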
Alternative considered: The main alternative was “just use generators”. But generators were clunky for this purpose: they required a runner (like co), and stack traces through generator yields were messier to debug. The success of similar features in other languages also gave confidence that async/await was the right approach.
Design Decision on Keywords: async and await were chosen even though those words were not reserved in ES5. This raised a backward-compatibility question: could existing code using variables or properties named async break? TC39 can introduce new contextual keywords – words treated as keywords only in specific positions. Here, async is a keyword only when it directly precedes function, and await is reserved only inside async functions. It turned out fine: few pages used an identifier named await, and any that did could simply rename it (using it as a variable name inside an async function is a syntax error) – minor disruption. So this was handled.
Engine Internals: Engines implemented async functions by treating them like generators under the hood that automatically return a promise. For example, in V8, an async function is essentially a generator that yields on awaited promises, and when the function returns, it resolves the promise. So overhead-wise, an async function call is a bit more expensive than a normal function (it has to create a promise and maybe some extra microtask handling). But engines optimized heavily, and the convenience far outweighs any small overhead (which is usually negligible relative to the I/O waits you’re doing in them).
Impact on Ecosystem Patterns: After async/await, patterns like callback pyramids largely disappeared in modern code. Promises are often used under the hood or at boundaries (like a function returns a promise so it can be awaited, but internally it may be using await itself). Some libraries like Bluebird (a popular promise library that offered extra features and better performance than native Promises early on) became less crucial as native promises got good and async/await made usage easier.
Compatibility with older code: Async functions returning promises integrate well with existing promise-based APIs. For instance, you can use an async function as an argument to .then or anywhere a promise is expected. This made adoption incremental – you didn’t have to rewrite entire codebase, you could start writing new functions with async/await and call them from older promise chains or vice versa. That smooth interoperability (thanks to them using real promises under the hood) was key.
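That interoperability can be shown in a few lines (the function names are invented for illustration):

```javascript
// An async function returns a real Promise, so it composes with
// promise-style code in both directions.
async function double(n) {
  return n * 2; // automatically wrapped in a resolved promise
}

// Older promise-chain code can consume it...
double(4).then(v => console.log(v)); // 8

// ...and async code can await a plain promise-returning function:
function legacyFetchValue() {
  return Promise.resolve(10);
}
async function combined() {
  const v = await legacyFetchValue();
  return double(v); // awaiting/returning an async function works the same way
}
combined().then(v => console.log(v)); // 20
```

Because both styles use the same underlying Promise objects, a codebase can migrate one function at a time.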
Developer Reception: Overwhelmingly positive. Async/await is often cited as one of the best improvements to JS, with roughly 93% satisfaction in surveys[35] – essentially everyone who has used it likes it. It improved code quality and reduced subtle bugs (such as omitted error handling). It also helped newcomers: asynchronous code could be taught without first explaining promise chaining in depth – one could start with “write it like synchronous code, but add await”.
Future related proposals:
- Top-level await (ES2022) extended await outside functions in modules – important for module initialization that needs asynchronous setup (e.g., awaiting a dynamic import or a database connection).
- TC39 continues to explore cancellation mechanisms (AbortController is widely used, but a language-level facility for promises/async may yet come).
- Asynchronous iteration (the for await...of loop, added in ES2018, for looping over async iterables) completed the picture, allowing streams of data to be consumed with the same clean syntax.
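The for await...of loop mentioned above can be sketched as follows (an illustrative example with invented names and a simulated delay):

```javascript
// ES2018's `for await...of` consumes async iterables with plain loop
// syntax. An async generator is the simplest async iterable:
async function* numbers() {
  for (let i = 1; i <= 3; i++) {
    await new Promise(r => setTimeout(r, 1)); // simulate an async data source
    yield i;
  }
}

async function collect() {
  const seen = [];
  for await (const n of numbers()) { // awaits each yielded value in turn
    seen.push(n);
  }
  return seen;
}

collect().then(seen => console.log(seen)); // [ 1, 2, 3 ]
```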
A possible “what-if”: What if TC39 had tried to introduce threads or a different concurrency model instead? JavaScript stuck to its single-threaded model (with Web Workers for multi-threading when needed). Async/await doesn’t introduce actual parallel threads, it’s cooperative concurrency – which fits the JS model well (no new data races or locking issues). So it was a minimal, elegant solution rather than something like “spawn thread” (which would be very un-JS). In that sense, async/await was an example of meeting developers where they were: using promises, wanting simpler syntax, without altering the fundamental model of the language.
In conclusion, async/await was a relatively straightforward proposal that solved a huge practical pain point by building on existing successful abstractions (promises, which themselves built on callback patterns). It was delivered in a timely manner – about 2 years after promises were standard, giving them a chance to be used, but not waiting too long while folks suffered with then-chains. Its success can be measured by how uncontroversial it is today – virtually all JS developers take it for granted and would not want to go back.
(Sources: the standardization of promises in ES2015 partly to keep them out of the HTML spec[17]; historical details drawn from TC39 meeting-notes references[57]; and the popularity of generator-plus-promise libraries like co, discussed above, which signaled community demand for the feature.)
Artifacts Appendix Link to heading
In this appendix, we include code snippets and configurations from various eras of JavaScript history, with commentary, to illustrate how practices evolved. Each artifact is accompanied by a brief explanation and the context (with approximate date).
A1. Early jQuery Plugin Pattern (circa 2007): jQuery’s plugin architecture allowed extending the $ selector’s prototype ($.fn). Below is a minimal example of a jQuery plugin that highlights an element.
(function($) {
// A jQuery plugin to highlight elements
$.fn.highlight = function(color) {
return this.each(function() {
$(this).css('backgroundColor', color || 'yellow');
});
};
})(jQuery);
// Usage: $("p.notice").highlight();
Commentary: This IIFE (Immediately Invoked Function Expression) receives jQuery as $ and adds a function to $.fn (jQuery’s prototype for selected sets). By returning this (each returns the original set), the plugin stays chainable. This pattern was ubiquitous in the late 2000s – the jQuery plugin ecosystem produced hundreds of plugins for sliders, modals, etc., in this style. It shows how developers shared reusable DOM logic before module systems were standard (each plugin typically attached itself globally to jQuery). The self-invoking wrapper ensured $ referred to the jQuery object even if the global $ was aliased (noConflict mode).
A2. Grunt Configuration File (circa 2013): Grunt was a popular task runner. Here’s an excerpt from a typical Gruntfile.js:
module.exports = function(grunt) {
grunt.initConfig({
pkg: grunt.file.readJSON('package.json'),
uglify: {
build: {
src: 'src/app.js',
dest: 'dist/app.min.js'
}
},
jshint: {
files: ['src/**/*.js'],
options: {
esnext: true
}
}
});
// Load the plugins
grunt.loadNpmTasks('grunt-contrib-uglify');
grunt.loadNpmTasks('grunt-contrib-jshint');
// Default tasks
grunt.registerTask('default', ['jshint', 'uglify']);
};
Commentary: Grunt configuration was JSON-like, specifying tasks (like uglify for minification, jshint for linting). Each Grunt plugin read its config from this object. This approach – a big config object – started to wane as projects grew (it could become very long and less flexible). In the mid-2010s, Gulp introduced a code-over-config approach, and later bundlers like Webpack used JS config but more programmatic. But Grunt was instrumental in standardizing build processes for JS applications – instead of ad-hoc shell scripts, front-end devs embraced tools like this.
A3. webpack 1.x Configuration (2015): An early webpack config (this example uses the version 1 `loaders` syntax) might look like:
module.exports = {
entry: './src/main.js',
output: {
filename: 'bundle.js',
path: __dirname + '/dist'
},
module: {
loaders: [
{ test: /\.js$/, exclude: /node_modules/, loader: 'babel-loader' }
]
}
};
Commentary: This config specifies an entry point and an output bundle, and sets up a loader to transform ES6 code via Babel. It highlights the shift to module-aware bundling – by this time, developers were authoring ES2015 modules or using CommonJS, and needed to bundle for browsers. The presence of babel-loader shows transpilation was now a standard part of the workflow, a big change from earlier when only minification or concatenation were done. This particular example would transpile ES6 to ES5 and bundle into dist/bundle.js. Webpack’s config format evolved (loaders became “rules”, etc.), but this captures the core: treating all assets as modules, using loaders to handle different file types.
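For comparison, here is what the same setup looks like in webpack 2+ syntax, where loaders became rules and use replaced the bare loader key (a sketch under the same assumptions as the original config):

```javascript
// webpack 2+ form of the same config: `module.loaders` became
// `module.rules`, and `use` names the loader(s) for matching files.
const path = require('path');

module.exports = {
  entry: './src/main.js',
  output: {
    filename: 'bundle.js',
    path: path.resolve(__dirname, 'dist')
  },
  module: {
    rules: [
      { test: /\.js$/, exclude: /node_modules/, use: 'babel-loader' }
    ]
  }
};
```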
A4. Node.js CommonJS Module (2010s): A simple Node module exporting a function and using require:
// mathUtils.js - CommonJS module
function add(a, b) {
return a + b;
}
function multiply(a, b) {
return a * b;
}
module.exports = { add, multiply };
// usage in Node:
const math = require('./mathUtils');
console.log( math.add(2,3) ); // 5
Commentary: CommonJS modules (the require/module.exports system) were adopted by Node around 2009–2010 and became the norm for modules outside browsers. This snippet exemplifies how Node code was structured for a decade: each file has its own scope (via the module wrapper), and module.exports can be an object or any other value. While straightforward, this module format is synchronous (suited to Node’s file-system loading) and not directly usable in browsers without bundling. A standardized format did not arrive until ES2015 modules, and Node did not support ES modules natively until 2019+[27], so CommonJS remains prevalent. The snippet may look trivial, but utility modules like this, wired together with require, were a fundamental shift from earlier IIFE patterns and global scripts.
A5. ES2015 Module with Async/Await (2017): An example of modern syntax:
// dataService.mjs - ES module
import fetch from 'node-fetch'; // using import (in Node with ESM or bundler)
export async function getUserData(userId) {
const res = await fetch(`https://api.example.com/users/${userId}`);
if (!res.ok) {
throw new Error(`Failed to fetch user ${userId}`);
}
return res.json(); // returns a promise, which async will await
}
Commentary: This shows an ES module (import and export keywords) together with async/await. By 2017, with Node 8 and transpilers, this style had become common. The code reads much like synchronous code but is fully asynchronous and non-blocking. Transpiled to ES5 it becomes something like a promise chain, but developers writing it can ignore those details. Note that top-level await wasn’t allowed until 2022, so here await sits inside an exported async function. The combination of ES modules and async/await represents the maturation of JS syntax – cleaner and more maintainable.
A6. Service Worker Registration Snippet (2015–2016): With the advent of PWAs:
if ('serviceWorker' in navigator) {
navigator.serviceWorker.register('/sw.js')
.then(reg => {
console.log('ServiceWorker registered with scope:', reg.scope);
})
.catch(error => {
console.error('ServiceWorker registration failed:', error);
});
}
Commentary: This is how one registered a Service Worker, per the W3C Service Workers spec (first shipped in Chrome in 2015). It uses promises. The snippet highlights the new Web APIs that gave JS offline and background capabilities. The 'serviceWorker' in navigator feature check is a common pattern for progressive enhancement. It also shows that promise-based APIs were by then common in the web platform (.register() returns a Promise that resolves to a registration object). It is a far cry from earlier DOM Level 0 scripting – evidence of how the platform evolved to give JS more power (e.g., the ability to intercept network requests via the Service Worker).
A7. TypeScript Class with Decorators (circa 2019): Many frameworks (Angular, TypeORM) use decorators heavily:
import { Entity, Column, PrimaryGeneratedColumn } from "typeorm";
@Entity()
export class User {
@PrimaryGeneratedColumn()
id!: number;
@Column({ length: 100 })
name!: string;
@Column()
isActive!: boolean;
}
Commentary: This TypeScript code (for an ORM) uses experimental decorators (the @Entity/@Column syntax) to annotate a class and its properties. Decorators have not yet been standardized in ECMAScript (as of 2025 the proposal is close[49]), but TS provided them behind a compiler flag. The pattern became popular in Angular as well (@Component, @Injectable, etc.). This artifact shows how language extensions via TS filled a need – attaching metadata to class properties – for which there was no native syntax, and it influenced TC39’s work on decorators. Transpiled, this emits JS that attaches metadata via helper calls (e.g., Reflect.metadata when metadata emission is enabled). It demonstrates that TypeScript is not just types: it introduced new syntax, on track to become standard, to improve DX in large apps. By the late 2010s TS had significant mindshare, and code like this was common in enterprise apps.
A8. ESM Build Config via package.json (2021): With Node’s ES module support:
{
"name": "my-esm-app",
"type": "module",
"dependencies": {
"some-lib": "^3.2.1"
}
}
Commentary: The "type": "module" field in package.json (since Node 13+) tells Node to treat .js files as ESM by default[59]. This small artifact marks an important transitional mechanism as Node moves from CommonJS to ESM. If this field is set, you can use import/export in Node without needing .mjs extension. Many package authors in 2021–2022 started publishing dual packages or ESM-only packages, signaling a migration. This configuration shows how standards adoption sometimes requires coordination (Node had to introduce this because unlike browsers which differentiate by <script type="module">, Node had only file extensions or this field to know how to parse). It’s part of the modernization of Node to align with the browser module standard.
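A dual package typically pairs "type": "module" with conditional entry points in the exports field. This is a hedged sketch: the file paths and package name are illustrative, while the import/require condition names are Node’s standard ones:

```json
{
  "name": "some-lib",
  "version": "3.2.1",
  "type": "module",
  "exports": {
    ".": {
      "import": "./dist/index.js",
      "require": "./dist/index.cjs"
    }
  }
}
```

With a map like this, `import` resolves to the ESM build and `require` to the CommonJS build, letting one package serve both module systems during the transition.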
Each of these artifacts reflects an aspect of JavaScript's evolution:
- A1: Extension and encapsulation via libraries in a pre-module world.
- A2 & A3: Build tools enabling the multi-file, transpiled, optimized workflows that became necessary.
- A4: The Node module system that unified script structuring on the server (and, via bundlers, on the client).
- A5: Modern syntax features making code cleaner and asynchronous flow easier.
- A6: New web capabilities (Service Workers) changing what JS can do on the web (offline, push notifications, etc.).
- A7: The influence of TypeScript and the need for meta-programming features, pushing the language forward.
- A8: The ongoing module-format transition marking the “end” of an era (CommonJS) and full embrace of the standard ESM.
These concrete examples complement the historical narrative by showing what actual code looked like at various points in time, anchoring the discussion in something tangible. Each snippet would run (or at least is representative of code that did run) in its respective context/time with the tools of that era.
[1] [3] [4] [5] [6] [7] [8] [9] [10] [11] [12] [13] [14] [15] [16] [17] [18] [19] [21] [27] [30] [31] [59] JavaScript: The First 20 Years
https://www.wirfs-brock.com/allen/jshopl.pdf
[2] [20] [25] [26] SpiderMonkey - Wikipedia
https://en.wikipedia.org/wiki/SpiderMonkey
[22] [23] [24] [42] [43] How JavaScript works: Optimizing the V8 compiler for efficiency - LogRocket Blog
https://blog.logrocket.com/how-javascript-works-optimizing-the-v8-compiler-for-efficiency/
[28] [29] [49] Patrick - My very short and incomplete analysis of the State of JS 2024 survey results
[32] [44] npm Blog Archive: So long, and thanks for all the packages!
https://blog.npmjs.org/post/615388323067854848/so-long-and-thanks-for-all-the-packages.html
[33] npm passes the 1 millionth package milestone! What can we learn?
https://snyk.io/blog/npm-passes-the-1-millionth-package-milestone-what-can-we-learn/
[34] [50] [51] [52] A short history of build tools
https://perpetual.education/resources/a-short-history-of-build-tools/
[35] The Relevance of TypeScript in 2022 - CSS-Tricks
https://css-tricks.com/the-relevance-of-typescript-in-2022/
[36] [37] [55] How one programmer broke the internet by deleting a tiny piece of code
https://qz.com/646467/how-one-programmer-broke-the-internet-by-deleting-a-tiny-piece-of-code
[38] kik, left-pad, and npm
https://blog.npmjs.org/post/141577284765/kik-left-pad-and-npm
[39] [40] [41] [56] npm Blog Archive: Details about the event-stream incident
https://blog.npmjs.org/post/180565383195/details-about-the-event-stream-incident
[45] [46] [47] [48] [57] Top-level await details
https://www.proposals.es/proposals/Top-level%20await
[53] npm in Review: A 2023 Retrospective on Growth, Security, and...
https://socket.dev/blog/2023-npm-retrospective
[54] The race for speed part 1: The JavaScript engine family tree
http://creativejs.com/2013/06/the-race-for-speed-part-1-the-javascript-engine-family-tree/index.html
[58] Async Functions - TC39