<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>CNN on Marginalia</title><link>https://sguzman.github.io/marginalia/tags/cnn/</link><description>Recent content in CNN on Marginalia</description><generator>Hugo</generator><language>en-us</language><lastBuildDate>Thu, 12 Feb 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://sguzman.github.io/marginalia/tags/cnn/index.xml" rel="self" type="application/rss+xml"/><item><title>History of Neural Networks in Computing (1940s–2026)</title><link>https://sguzman.github.io/marginalia/posts/history-of-neural-networks-in-computing/</link><pubDate>Thu, 12 Feb 2026 00:00:00 +0000</pubDate><guid>https://sguzman.github.io/marginalia/posts/history-of-neural-networks-in-computing/</guid><description>An annotated history of artificial neural networks in computing from the 1940s through 2026. It traces the ideas that shaped the field - neurons as logical units, perceptrons, gradient-based learning, convolution, recurrent models, and attention - and shows how they cycled in and out of favor as compute, data, and evaluation practices changed. Along the way it highlights the social and institutional context (cybernetics, connectionism, the AI winters, and the deep-learning boom), and explains why certain results became decisive turning points. The goal is to give a coherent timeline plus enough mathematical and engineering intuition to understand what each wave added, what it failed to solve, and how later work reframed earlier limitations.</description></item></channel></rss>