Python has come a long way since its first official release in 1991. Today, in 2026, Python 3 has become a ubiquitous tool and one of the most widely used programming languages. Throughout the years, many features we now take for granted have been introduced, along with many others you might not even know about.
In this complete historical walkthrough of Python 3 evolution, we will go through each release from 3.0 up to the highly anticipated Python 3.15 release of 2026, highlighting the major features and discovering many unknown changes along the way.
Python 3.0 — A Clean Slate for the Future
Official Release Notes
By the late 2000s Python 2 had become hugely successful, but some early design choices no longer made sense. Arriving on December 3, 2008, Python 3 boldly broke backward compatibility to rid Python of its legacy quirks and set a new standard for the language.
Fixing the language's biggest quirks
Believe it or not, in the old days of Python 2, print was a statement, not a function. This made it impossible to use in lambdas or pass as an argument. By making it a function, Python 3 made the most common operation in the language consistent with everything else.
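To see why this matters, here is a minimal sketch (the `apply_to_each` helper and the sample values are hypothetical, just for illustration):

```python
greetings = ["hello", "world"]

# print is now a plain function object, so it can be passed around
# like any other callable (impossible with the old print statement).
def apply_to_each(func, items):
    for item in items:
        func(item)

apply_to_each(print, greetings)  # prints "hello" then "world"

# It also works inside a lambda:
shout = lambda msg: print(msg.upper())
shout("hi")  # prints "HI"
```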
Then, there was the issue of strings. In Python 2, there was a confusing split between str (which was just bytes) and a separate unicode type. Encoding was a game of Russian roulette.
A string might be UTF-8, or it might be Latin-1. You’d only find out when your program crashed and debugging this wasn’t always easy.
Realizing this was not very practical, Python 3 made text strictly str (Unicode) and binary data strictly bytes.
Before
data = "café"
# Is it UTF-8? Is it ISO-2022-JP? Only God knows until you run the code

After
text = "café" # str: always Unicode text
data = text.encode() # bytes: always explicit binary data

Streamlining sequences
There was a time when range(1000000) would literally create a list with a million items.
You had to use xrange() if you wanted to be efficient, since xrange produced items lazily, as they were consumed.
In Python 3, range() behaves like the old xrange(), making the latter obsolete and Python more memory-efficient.
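A small sketch of the difference in practice:

```python
import sys

# A range object is lazy: it stores only start, stop, and step.
lazy = range(10**6)
materialized = list(lazy)

print(sys.getsizeof(lazy))                       # a few dozen bytes on CPython
print(sys.getsizeof(materialized) > 1_000_000)   # True: real megabytes

# Membership tests and indexing are computed arithmetically, not by scanning.
print(999_999 in lazy)  # True
print(lazy[-1])         # 999999
```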
Unpacking sequences also got easier with the star operator.
Grabbing the first element and the rest of a list used to require manual slicing.
Python 3.0 introduced the * operator in assignments, letting you unpack sequences naturally.
Before
seq = [1, 2, 3, 4, 5]
first = seq[0]
rest = seq[1:-1]
last = seq[-1]

After
first, *rest, last = [1, 2, 3, 4, 5]
# first=1, rest=[2, 3, 4], last=5

Dict and set comprehensions got simpler
List comprehensions existed in Python 2, but if you wanted to build a set or a dictionary in one expression, you had to pass a list comprehension into the constructor. Python 3.0 gave sets and dicts their own native comprehension syntax.
Before
squares = dict([(x, x**2) for x in range(5)])
unique = set([x for x in data if x > 0])

After
squares = {x: x**2 for x in range(5)}
unique = {x for x in data if x > 0}

Chaining Exceptions to Better Understand Errors
When an exception occurred inside an except block, the original traceback was silently lost. Python 3.0 introduced raise ... from ... to explicitly link errors, so when debugging you see the full causal chain instead of guessing what was swallowed.
Before
try:
    do_database_thing()
except DBError as e:
    raise AppError("App crashed")
# The original traceback of DBError is gone forever.

After
try:
    do_database_thing()
except DBError as e:
    raise AppError("App crashed") from e
# Full traceback preserved.

Reach through scopes with the nonlocal keyword
Closures in Python 2 could read variables from an enclosing scope, but couldn’t modify them. The common hack was to wrap the value in a mutable container like a list. nonlocal made this clean.
Before
def outer():
    count = [0]  # Mutable hack to modify from inner scope
    def inner():
        count[0] += 1
        return count[0]

After
def outer():
    count = 0
    def inner():
        nonlocal count
        count += 1
        return count

Python 3.1 — Maturing the New Standard
Official Release Notes
Released on June 27, 2009, this update proved that Python 3 was ready for serious engineering by introducing highly practical data structures and context management improvements.
Explicitly Enforce Order with OrderedDict
In the olden days, Python dictionaries didn't care about order. Printing one might give you {'b': 2, 'a': 1} or {'a': 1, 'b': 2} at random.
Python 3.1 introduced OrderedDict, making it possible to guarantee order whenever necessary.
In CPython 3.6, insertion order was preserved as an implementation detail; it became an official language guarantee in Python 3.7.
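A short sketch of what OrderedDict guarantees, and what still sets it apart from plain dicts today:

```python
from collections import OrderedDict

d = OrderedDict()
d["a"] = 1
d["b"] = 2
d["c"] = 3
print(list(d))  # ['a', 'b', 'c']: iteration follows insertion order

# move_to_end is still unique to OrderedDict.
d.move_to_end("a")
print(list(d))  # ['b', 'c', 'a']

# Unlike plain dicts, OrderedDict comparisons are order-sensitive.
print(OrderedDict(a=1, b=2) == OrderedDict(b=2, a=1))  # False
print(dict(a=1, b=2) == dict(b=2, a=1))                # True
```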
A Built-in Counter Object
Counting how often each item appears in a list is one of the most common data tasks.
Before Counter, you had to write a manual tally loop every single time.
With Counter, not only is this built-in to Python but it also includes helpful methods:
from collections import Counter
counts = Counter(['apple', 'apple', 'pear'])
# Counter({'apple': 2, 'pear': 1})
counts.most_common(1) # [('apple', 2)]

Flatter code with multiple context managers
If you needed to open two files at once, you had to nest with statements, creating an ever-growing indentation pyramid. Python 3.1 allowed multiple context managers on a single line. A simple and very much needed change.
Before
with open('source.txt') as src:
    with open('dest.txt', 'w') as dst:
        dst.write(src.read())

After
with open('source.txt') as src, open('dest.txt', 'w') as dst:
    dst.write(src.read())

Python 3.2 — Equipping the Standard Library
Official Release Notes
Launched on February 20, 2011, Python 3.2 armed developers with production-ready modules for CLI building, advanced caching, and seamless concurrency.
Better CLIs with argparse
If you’ve built a Python CLI, you’ve probably used argparse. It was added to the standard library in Python 3.2. While optparse already handled traditional option parsing, argparse was added to support more complex CLI patterns such as positional arguments, subcommands, required options, and built-in validation.
import argparse
parser = argparse.ArgumentParser()
parser.add_argument("name")
parser.add_argument("--shout", action="store_true")

Simpler concurrency with futures
Multi-threading was also simplified in Python 3.2 with the release of concurrent.futures, a handy built-in library for many of your concurrency needs.
Before
import threading
threads = []
for i in range(4):
    t = threading.Thread(target=work, args=(i,))
    threads.append(t)
    t.start()

As you can see, concurrency used to be verbose and thus more error-prone. concurrent.futures introduced a higher-level API that treats “tasks” as things that will return a value in the “future”, shielding you from low-level thread management.
After
from concurrent.futures import ThreadPoolExecutor
with ThreadPoolExecutor() as executor:
    results = list(executor.map(work, [1, 2, 3, 4]))

Built-in cache with lru_cache
Caching the result of an expensive function used to require writing your own dictionary wrapper. lru_cache turned that into a single decorator. This may be one of the most powerful decorators in the standard library. By adding a single line, you get an LRU (Least Recently Used) cache with automatic eviction, thread-safety, and even cache hit statistics via fetch_data.cache_info().
from functools import lru_cache
@lru_cache(maxsize=32)
def fetch_data(url):
    return http_get(url)

Stop running useless tests with conditional skips
Testing got smarter with decorators for skipping tests and marking expected failures. Before, you’d return early inside a test and the runner would mark it as “Passed” even though it never actually ran.
Before
def test_windows_registry(self):
    if not sys.platform.startswith("win"):
        return  # Runner says "Passed". Misleading!

After
@unittest.skipUnless(sys.platform.startswith("win"), "Requires Windows")
def test_windows_registry(self):
    ...

Python 3.3 — A minor but much-needed release
Official Release Notes
Hitting the scene on September 29, 2012, this version added many much-needed built-in tools and paved the way for modern async with generator delegation.
Even More Built-in Tools
Unit testing is not everyone’s favorite part of software development, so any tool that makes it easier is appreciated.
The mock library was already massively popular as a third-party package.
Python 3.3 standardized it inside the standard library, giving every project instant access to mock objects without an extra dependency.
from unittest.mock import Mock
service = Mock()
service.hello.return_value = "Hello, world!"
print(service.hello()) # Hello, world!

Virtual environments have changed software development forever. No longer can you say “but it works on my machine”, with some (many) exceptions. For a long time, people used third-party tools like virtualenv to achieve this. In Python 3.3, venv was added to the standard library, making dependency isolation a built-in part of the language’s workflow.
Python also added a tool for validating and manipulating IP addresses. Doing it yourself with regex was notoriously error-prone: a naive regex like \d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3} happily accepts 999.999.999.999. The ipaddress module made network parsing safe and object-oriented.
import ipaddress
ip = ipaddress.ip_address("192.168.1.10")
print(ip.is_private) # True

Finally, when a C extension crashed, Python used to just print Segmentation fault (core dumped) and die: no traceback, no clue where it happened. Now, thanks to faulthandler, Python can dump a traceback at the exact moment of the crash, making debugging less of a headache.
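Enabling it is a one-liner; a minimal sketch:

```python
import faulthandler

# Opt in once, typically at program startup; from then on a fatal
# crash (e.g. a segfault in a C extension) dumps a Python traceback.
faulthandler.enable()
print(faulthandler.is_enabled())  # True

# You can also dump the current traceback on demand (written to stderr),
# which helps diagnose hangs without killing the process.
faulthandler.dump_traceback()
```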
Delegate your generator work with yield from
This was a massive win for readability. yield from allows a generator to delegate its work to another, which became a vital pattern for the early async implementations.
Before
def countdown(n):
    for i in range(n, 0, -1):
        yield i

def blastoff():
    for i in countdown(3):
        yield i
    yield "🚀"

After
def countdown(n):
    yield from range(n, 0, -1)

def blastoff():
    yield from countdown(3)  # Elegant delegation
    yield "🚀"

Python 3.4 — Laying the Async Architecture
Official Release Notes
Released on March 16, 2014, this milestone update formally introduced the asyncio event loop, shifting asynchronous programming from a niche add-on to a core language philosophy.
Enter the event loop era with asyncio
Before asyncio, asynchronous Python often meant relying on third-party frameworks like Twisted or Gevent.
Those tools were powerful, but the programming model could feel fragmented and callback-heavy.
With Python 3.4, asyncio introduced a standard event loop into the standard library.
That was a major shift: asynchronous programming was no longer just a niche ecosystem pattern, but something Python itself officially supported.
At the time, though, Python did not yet have the modern async / await syntax (that would arrive in the next release). Early asyncio code used decorators and generator-based coroutines with yield from.
import asyncio
@asyncio.coroutine
def greet():
    yield from asyncio.sleep(1)
    print("Hello after one second")

Replace your magic numbers with enums
Enums brought type safety and readability to constants. Instead of passing around magic integers or strings, you use a named, structured set of values that makes debugging much nicer.
Before
STATUS_PENDING = 1
STATUS_RUNNING = 2

After
from enum import Enum
class Status(Enum):
    PENDING = 1
    RUNNING = 2

Stop manipulating strings and embrace pathlib
os.path treated file paths as dumb strings. You joined them with os.path.join, checked existence with os.path.exists, and the code always looked clunky. pathlib treats paths as intelligent objects with methods, and uses the / operator to join them.
Before
import os
config_path = os.path.join(os.path.dirname(__file__), '..', 'config.json')
if not os.path.exists(config_path):
    pass

After
from pathlib import Path
config_path = Path(__file__).parent.parent / 'config.json'
if not config_path.exists():
    pass

Stop reimplementing math and use the statistics module
Basic statistical operations like mean and median used to require either pulling in numpy or writing manual math. Python 3.4 gave us a lightweight standard module for the fundamentals.
Before
data = [1, 2, 4, 4, 5]
mean = sum(data) / len(data)
# Median? Sort the list, find the midpoint, handle even/odd lengths...

After
import statistics
statistics.mean(data) # 3.2
statistics.median(data) # 4

Hunt down memory leaks with tracemalloc
Memory leaks in Python are rare but brutal. When your process balloons to 4GB, you used to have no idea which line of code was responsible. tracemalloc maps memory blocks to the exact line of Python that created them.
import tracemalloc
tracemalloc.start()
# ... run code ...
snapshot = tracemalloc.take_snapshot()
top_stats = snapshot.statistics('lineno')
# Shows exactly which line allocated the most memory.

Python 3.5 — The Dawn of Native Async and Type Hints
Official Release Notes
Unveiled on September 13, 2015, this release modernized Python’s syntax by introducing dedicated async/await keywords and laying the essential groundwork for static typing.
Write async code that actually looks like Python
This is the moment async Python started to feel like normal Python. The new keywords made asynchronous logic look and feel like standard synchronous code.
Before
import asyncio
@asyncio.coroutine
def fetch():
    yield from asyncio.sleep(1)

After
async def fetch():
    await asyncio.sleep(1)

Clean up your linear algebra with the @ operator
For the scientific community, this was a huge win. It turned deeply nested function calls back into readable linear algebra equations.
Before
import numpy as np
A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])
result = np.dot(A, B)

After
import numpy as np
A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])
C = A @ B

Note: Plain Python lists don’t implement the @ operator. It requires types like NumPy arrays that define the __matmul__ method.
Merge your collections with a splash of stars
What does it really mean for something to be “Pythonic”? This syntax update may be one of the best examples of that. It’s the most concise way to merge lists and dictionaries without mutating the original objects.
Before
a = [1, 2]
b = [3, 4]
combined = a + b + [5]
d1 = {"x": 1}
d2 = {"y": 2}
merged = d1.copy()
merged.update(d2)

After
combined = [*a, *b, 5]
merged = {**d1, **d2}

Bring order to the chaos with type hints
PEP 484 changed Python forever. While Python remains dynamically typed at runtime, the new typing module let you add static type hints that IDEs and tools like mypy can use to catch bugs before the code is even run.
Before
def process_user(user_data):
    """user_data must be a dict of string to integers."""
    pass

After
from typing import Dict
def process_user(user_data: Dict[str, int]) -> None:
    pass

Stop wrestling with Popen and use subprocess.run
Popen is incredibly powerful but usually overkill for running a simple command. subprocess.run() provided a single, clean, blocking API to execute external commands and capture their output.
Before
import subprocess
p = subprocess.Popen(["ls", "-l"], stdout=subprocess.PIPE)
out, err = p.communicate()
if p.returncode != 0:
    raise Exception()

After
import subprocess
result = subprocess.run(["ls", "-l"], stdout=subprocess.PIPE, check=True)
# (The friendlier capture_output=True shortcut arrived later, in Python 3.7.)
print(result.stdout)

Stop fighting floating point errors with math.isclose
Floating-point math is notoriously imprecise (0.1 + 0.2 equals 0.30000000000000004). Developers kept reinventing tolerance-based comparisons with arbitrary epsilons. isclose handles both absolute and relative tolerances cleanly.
Before
if abs(0.1 + 0.2 - 0.3) < 1e-9:
    print("Close enough")

After
import math
if math.isclose(0.1 + 0.2, 0.3):
    print("Mathematically close")

Python 3.6 — Python Got Prettier and a bit safer
Official Release Notes
Released on December 23, 2016, this fan-favorite update fundamentally changed how we write code with the introduction of the elegant f-string and variable type annotations.
More readable code
Following 3.5’s PEP 484 type hints, Python 3.6 lets us annotate variable types directly. This didn’t change how code runs, but it changed how we write it.
from typing import List
prices: List[float] = []

You can now also make your large numbers more readable with underscores. A tiny feature with a huge impact on readability: it’s now impossible to mistake a million for ten million at a glance.
big_number = 1_000_000_000

The holy f-strings
Oh, didn’t we all hate the %s syntax for string formatting? It was ugly, confusing, and cumbersome.
print("Hello, %s. Age: %d" % (name, age))
print("Hello, {}. Age: {}".format(name, age))

With Python 3.6 came f-strings: a faster and significantly more readable solution. Never have I adopted a feature so quickly. They allow you to put expressions directly inside the string, making them the default choice for almost every developer.
After
print(f"Hello, {name}. Age: {age}")

Secrets as a more secure alternative to random
Developers used random for passwords and tokens without realizing it wasn’t cryptographically secure. secrets provides a fast, safe alternative that hooks directly into the operating system’s cryptographic random generator.
import secrets
token = secrets.token_urlsafe(32)

In practice, secrets is now the standard for generating unpredictable values such as password-reset tokens, CSRF tokens, and session identifiers. It does not replace password hashing or broader security design, but for secure randomness itself, it is much more appropriate than random.
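A few more pieces of the module worth knowing (a quick sketch; the sample values are illustrative):

```python
import secrets

print(secrets.token_hex(16))    # 32 hex characters, 128 bits of entropy
print(secrets.randbelow(100))   # a secure random int in [0, 100)
print(secrets.choice(["red", "green", "blue"]))  # secure random pick

# Constant-time comparison guards token checks against timing attacks.
supplied = expected = secrets.token_urlsafe(32)
print(secrets.compare_digest(supplied, expected))  # True
```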
Stream your data asynchronously with ease
Python 3.5 gave us async/await, but you couldn’t use yield inside an async def or write async comprehensions. Python 3.6 extended the power of generators into the async world, allowing you to stream data asynchronously.
async def fetch_all():
    for url in urls:
        yield await fetch(url)  # Streams one at a time

data = [item async for item in fetch_all()]

Let Path objects roam free across the standard library
When pathlib was introduced in 3.4, standard library functions like open() didn’t actually accept Path objects. You had to convert them to strings. Python 3.6 created the os.PathLike protocol, so pathlib finally works everywhere natively.
Before
from pathlib import Path
path = Path('/tmp/file.txt')
with open(str(path)) as f:  # Had to convert to string manually
    pass

After
from pathlib import Path
path = Path('/tmp/file.txt')
with open(path) as f:  # Just works now
    pass

Python 3.7 — Streamlining Data and Debugging
Official Release Notes
Arriving on June 27, 2018, this version slashed boilerplate code with dataclasses and standardized the debugging experience across the entire ecosystem.
Some Nice Streamlining
Dataclasses killed the “boilerplate monster”.
Python now automatically generates __init__, __repr__, and __eq__ methods for you based on type hints.
from dataclasses import dataclass
@dataclass
class User:
    name: str
    age: int

Also, dictionaries now guarantee insertion order, making OrderedDict largely redundant. This sets another precedent of Python making a past bandage obsolete, as it did with xrange, ensuring the language evolves without being stuck with past decisions.
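A quick sketch of the guarantee in action:

```python
# Since 3.7, insertion order is part of the language specification.
d = {}
d["first"] = 1
d["second"] = 2
d["third"] = 3
print(list(d))  # ['first', 'second', 'third']

# Handy consequence: order-preserving de-duplication with a plain dict.
items = ["b", "a", "b", "c", "a"]
print(list(dict.fromkeys(items)))  # ['b', 'a', 'c']
```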
Finally, strings gained isascii(), a blazing-fast, C-level check that every character is within the ASCII range (0-127). Critical for safe logging and database constraints where you want to ensure no non-ASCII characters are present.
"café".isascii() # False

Stop typing pdb.set_trace and use breakpoint
Ah, the debugger: never used it and probably never will. However, with Python 3.7, it is easier to use than ever before.
breakpoint() gives us a single, standard way to enter the debugger. It also allows you to swap out the debugger (e.g., to pudb or ipdb) via environment variables without changing your code.
Before
import pdb; pdb.set_trace()

After
breakpoint()

Let’s be honest, we’ll all keep using print() to debug.
Measure your performance with nanosecond precision
For high-performance profiling and exact benchmarks, floating-point timestamps lose precision on fast machines because of rounding. The _ns suffix was added to multiple time functions to provide integer nanosecond accuracy.
import time
# Returns the time as a precise integer representing nanoseconds.
start = time.time_ns()

Keep your state safe across async boundaries
Thread-local storage breaks down in async Python because many coroutines can run on the same thread. That means “current request” or “current user” data can no longer safely live in thread-local variables.
contextvars fixes this by giving each async task its own logical context. In practice, this lets web frameworks and logging systems keep request-scoped data—such as a user ID, request ID, or trace ID—without having to pass it through every function call.
import contextvars
# Context-local storage correctly handles async task boundaries.
user_id = contextvars.ContextVar('user_id')
user_id.set(123)

Python 3.8 — A controversial release
Official Release Notes
Launched on October 14, 2019, Python 3.8 sparked debate and innovation by introducing the Walrus Operator for inline assignments and granting library authors stricter parameter controls.
Assign and check in one breath with the walrus
While very niche and controversial, it allows you to assign a variable and check its value on the same line. Personally, I don’t like it; I don’t think it helps readability, and I’ve yet to see other people use it, but it exists.
In the Python community, the debate around the walrus (PEP 572) was so intense that Guido van Rossum stepped down from Python leadership, saying he did not want to fight that hard over a PEP again.
Before
match = re.search(pattern, text)
if match:
    data = match.group(1)

After
if match := re.search(pattern, text):  # assigns and checks
    data = match.group(1)

Protect your API from keyword arguments
This is vital for library authors.
It allows them to change parameter names in the future without breaking the code of people using their library.
Before, if you wanted to prevent users from passing parameters by keyword, like func(a=1), you had to do manual checking in the function body.
Now, simply placing a slash in the signature ensures that the arguments before it must be passed positionally.
Personally, I believe code readability is the most important thing so this feature should be used sparingly.
Before
def func(a, b, **kwargs):
    pass

func(a=2, b=3) # works

After
def func(a, b, /):
    # a and b CANNOT be passed as keywords.
    pass

func(a=2, b=3) # raises a TypeError

Debug faster with self-documenting f-strings
A nice quality-of-life shortcut. It saves you from typing the variable name twice when logging state. It’s an okay feature, but I can’t help noticing that this release is so far composed of very niche syntax updates.
Before
print(f"user={user} score={score}")

After
print(f"{user=} {score=}") # Prints: user='Guido' score=99

Cache your properties and save your CPU
A basic cache for your properties. Now, heavy computation runs once on the first access, and subsequent accesses are as fast as a normal attribute lookup.
Before
class Dataset:
    @property
    def data(self):
        if not hasattr(self, '_data'):
            self._data = load_heavy_file()  # Takes 5 seconds
        return self._data

After
from functools import cached_property
class Dataset:
    @cached_property
    def data(self):
        return load_heavy_file()  # Evaluates once, then caches forever!

Shape your data with TypedDict and Literal
Python’s type system matured greatly here. TypedDict allows us to define the strict shape of dictionaries (like JSON responses), and Literal restricts values to exact strings or numbers.
from typing import TypedDict, Literal
class Config(TypedDict):
    id: int
    mode: Literal["r", "w"]  # Defines 'mode' as exactly "r" or "w"

Various other features were added
We’ve always had sum(). It only made sense to finally add a native product equivalent that handles the math properly and performs at C speed:
import math
result = math.prod([1, 2, 3, 4]) # 24

Similarly, we’ve always had shlex.split() to break command strings into lists. shlex.join() perfectly handles the reverse, safely quoting spaces and special characters.
import shlex
cmd_str = shlex.join(["ls", "-l", "my dir"]) # "ls -l 'my dir'"

Python 3.9 — Polishing Types and Dictionaries
Official Release Notes
Released on October 5, 2020, this update refined everyday developer workflows with intuitive dictionary merge operators and native collection type hints.
Merge your dicts with a single pipe
A more intuitive and readable way to merge dictionaries, matching the style of sets.
Before
merged = {**defaults, **overrides}

After
merged = defaults | overrides

Stop importing List and embrace native generics
You no longer need to import List, Dict, or Tuple from the typing module. You can use the built-in collection types directly as type hints.
Before
from typing import List
def process(items: List[int]): ...

After
def process(items: list[int]): ...

Handle time zones natively with zoneinfo
Python finally has a built-in way to handle IANA time zones without needing third-party libraries, making datetime math much more reliable out of the box.
from zoneinfo import ZoneInfo
eastern = ZoneInfo("US/Eastern")

Strip your string edges with surgical precision
Removing affixes without regular expressions used to require tedious slicing. These string methods provide a fast, safe, and intuitive way to strip edges.
url = "https://example.com"
clean = url.removeprefix("https://")   # 'example.com'
domain = clean.removesuffix(".com")    # 'example'

Solve your dependency graphs with graphlib
Resolving dependencies (like determining build order for packages) is a complex computer science problem. Having it built into the standard library saves countless hours of debugging bad graph algorithms.
from graphlib import TopologicalSorter
graph = {"task_B": {"task_A"}, "task_C": {"task_B"}}
ts = TopologicalSorter(graph)
print(tuple(ts.static_order())) # ('task_A', 'task_B', 'task_C')

Let math handle your least common multiples
Python 3.9 expanded the math library to support the least common multiple (LCM) across multiple arguments. math.gcd already existed, but math.lcm did not, forcing custom helper functions.
import math
print(math.lcm(4, 5, 6)) # 60

Python 3.10 — The Pattern Matching Revolution
Official Release Notes
Debuting on October 4, 2021, this massive syntax update brought functional programming flair to Python with highly anticipated structural pattern matching.
Ditch nested ifs for structural pattern matching
When going from C to Python, my biggest loss was the switch-case statement. It had been so long, I’d forgotten how much I liked it. With Python 3.10 we finally have an equivalent: match/case allows you to deconstruct complex data structures declaratively. It’s significantly cleaner than nested if statements for handling API responses or ASTs.
Before
if isinstance(data, dict) and "status" in data:
    if data["status"] == 200:
        if data["body"] == "Success":
            ...  # handling
        if data["body"] == "Partial":
            ...  # handling
    if data["status"] == 429:
        if data["body"] == "Retry":
            ...  # handling

After
match data:
    case {"status": 200, "body": "Success"}:
        ...  # handling
    case {"status": 200, "body": "Partial"}:
        ...  # handling
    case {"status": 429, "body": "Retry"}:
        ...  # handling
Clean up your type hints with the union pipe
Simple and Pythonic, it makes type hints look like standard Python logic. It’s cleaner, faster to type, and easier to read.
Before
from typing import Union
def parse(val: Union[int, str]): ...

After
def parse(val: int | str): ...

Count your bits at C-speed with bit_count
Also known as “population count” or popcount. Doing this via string manipulation was incredibly slow; it’s now a blazing fast native C function.
Before
count = bin(42).count('1') # Creating strings to do math!

After
count = (42).bit_count() # 3

Fail fast on mismatched iterables with zip(strict=True)
A massive win for data integrity. The strict=True parameter ensures your parallel loops crash loudly rather than silently ignoring mismatching data.
Before
# Silent data loss if lists are unequal!
list(zip([1, 2, 3], ['A', 'B'])) # [(1, 'A'), (2, 'B')] - 3 is silently ignored!

After
list(zip([1, 2, 3], ['A', 'B'], strict=True)) # Raises ValueError

Identify the standard library without guesswork
Essential for linters, formatters, and tooling that needs to differentiate between pip-installed packages and built-in Python modules without relying on hardcoded lists.
import sys
"json" in sys.stdlib_module_names # True because json is a standard lib

Python 3.11 — The Speed and Safety Upgrade
Official Release Notes
Dropping on October 24, 2022, this version not only delivered unprecedented performance boosts but also revolutionized how we handle concurrent errors with exception groups.
Handle a swarm of errors with exception groups
Before, if 10 async tasks failed, you’d usually only see the error for the first one.
Now we have except* and ExceptionGroup. This allows concurrent code to report multiple failures simultaneously, making it much easier to debug complex async or multi-threaded applications.
import asyncio

async def task1():
    raise ValueError("Invalid user ID")

async def task2():
    raise ValueError("Wrong data type")

async def main():
    try:
        async with asyncio.TaskGroup() as tg:
            tg.create_task(task1())
            tg.create_task(task2())
    except* ValueError as eg:  # eg is an ExceptionGroup, which is iterable
        print("Handled ValueError(s)")
        for e in eg.exceptions:
            print(" -", e)

asyncio.run(main())

Parse your pyproject.toml natively with tomllib
TOML has become the de facto configuration language for Python tools. Including a lightning-fast native parser ensures the Python ecosystem doesn’t need external dependencies just to bootstrap itself.
import tomllib
with open("pyproject.toml", "rb") as f:
    config = tomllib.load(f)

Orchestrate your tasks with TaskGroup
TaskGroup revolutionized async safety. It provides structured concurrency, ensuring that background tasks are strictly managed, awaited, or cleanly shut down if errors occur.
Before
# With gather(), if one task fails, the others continue to run until they
# complete or are explicitly cancelled, potentially wasting resources.
results = await asyncio.gather(task1(), task2())

After
async with asyncio.TaskGroup() as tg:
    task1 = tg.create_task(do_work())
    task2 = tg.create_task(do_work())
# TaskGroup guarantees that if one task fails, all other remaining tasks
# in the group are automatically cancelled.

Enrich your errors with add_note
You can now add helpful context to an error without changing the original exception type or losing the stack trace.
Before
try:
    raise ValueError("Bad")
except ValueError as e:
    # You'd have to wrap it in a new exception to add info
    raise ValueError(f"Context: {e}") from e

After
try:
    raise ValueError("Bad")
except ValueError as e:
    e.add_note("Check your API key in .env")
    raise

Type your fluent APIs elegantly with Self
Self means “an instance of this class.” It is especially useful for methods like copy(), builders, or fluent APIs that return self, because type checkers can understand that the return type should stay tied to the actual class.
That is particularly helpful with inheritance: if a subclass calls copy(), the result is inferred as the subclass, not just the parent class. Before Self, preserving that behavior required a more verbose TypeVar pattern.
Before
from typing import TypeVar
T = TypeVar("T", bound="MyClass")

class MyClass:
    def copy(self: T) -> T: ...  # Very verbose

After
from typing import Self
class MyClass:
    def copy(self) -> Self: ...

Python 3.12 — Unleashing Generics and f-strings
Official Release Notes
Released on October 2, 2023, this update made type hinting feel like a native language feature and removed the historical limitations of f-string formatting.
Write generics that look like real code
Generics now feel like a native language feature rather than an imported hack. It’s cleaner and more intuitive for anyone coming from languages like Java or TypeScript.
Before
from typing import TypeVar
T = TypeVar("T")
def first(l: list[T]) -> T: ...

After
def first[T](l: list[T]) -> T: ...

Break free from f-string quote restrictions
The “Quote Nightmare”: you couldn’t reuse, inside an f-string expression, the same quote character that delimited the string. Also, no comments were allowed inside the braces. f-strings no longer have these arbitrary restrictions; they are now parsed as full Python expressions, allowing for much more natural code.
Before
print(f"Songs: {', '.join(songs)}")  # Had to be careful
After
# Use any quotes, add comments, write multiline logic.
print(f"Songs: {
', '.join(songs) # Comments are now okay!
}")
Batch your iterables without the manual math
Batching iterables is extremely common (e.g., hitting an API 50 IDs at a time). batched provides an efficient, built-in C-level tool that works natively on any iterable, not just lists.
Before
# The manual chunking era.
chunk_size = 3
for i in range(0, len(data), chunk_size):
    chunk = data[i:i + chunk_size]
After
from itertools import batched
for chunk in batched(data, 3):
    pass  # Process 3 items at a time cleanly.
Protect your method overrides with @override
For developers coming from languages with a native override keyword, this decorator ensures object-oriented hierarchies don’t break silently when you rename a parent method.
Before
class Parent:
    def process(self): pass
class Child(Parent):
    def proces(self): pass  # Typo! But it fails silently, and parent logic runs instead.
After
from typing import override
class Child(Parent):
    @override
    def proces(self): pass  # Type checker immediately yells at you!
Walk your directories the object-oriented way
The final nail in the coffin for os.walk. Fully object-oriented directory crawling is finally here.
Before
import os
from pathlib import Path
# os.walk yields strings, so you must wrap them back into Path objects manually.
for root, dirs, files in os.walk(directory):
    path = Path(root) / files[0]
After
from pathlib import Path
# Everything yielded is natively a Path object!
for root, dirs, files in Path(directory).walk():
    path = root / files[0]
Python 3.13 — Unlocking True Concurrency
Official Release Notes
Released on October 7, 2024, this historic milestone finally began phasing out the Global Interpreter Lock (GIL) and completely modernized the default interactive shell.
Enjoy a shell that actually likes you
The new shell makes interactive Python feel like a modern tool, with better help prompts and a much smoother developer experience.
Before
# Basic, no colors, annoying indentation, exit() required.
>>> exit()
After
# Colorful, multi-line editing, smart history, exit just works.
>>> exit
Embrace the multi-core future without the GIL
Oh, the GIL, what a story. When Python was created, single-core processors were the norm and multi-threading was still largely an academic idea. The interpreter’s original design therefore never allowed truly parallel execution of Python code.
Instead, Python faked multi-threading by letting threads take turns. That worked for I/O-bound tasks, but CPU-bound work could never actually run in parallel.
As the language evolved this limitation became harder and harder to remove but also harder to justify. After much work and deliberation, Python 3.13 added experimental support for a free-threaded build.
While the GIL remains in the standard build, the experimental free-threaded build (installed as a separate python3.13t executable) can run threads in parallel on multiple cores. On that build, you can control the GIL with -X gil=0 or by setting the PYTHON_GIL=0 environment variable.
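To see what this unlocks, here is a sketch of the kind of CPU-bound workload that benefits (count_primes is an illustrative toy): on a GIL build the four threads take turns on one core, while the free-threaded build can run them on separate cores simultaneously.

```python
from concurrent.futures import ThreadPoolExecutor

def count_primes(limit: int) -> int:
    # Naive trial division: deliberately CPU-bound work.
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

# Identical on both builds; only the free-threaded build
# can actually execute these threads in parallel.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(count_primes, [20_000] * 4))
print(results)
```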
# On a free-threaded build:
python3.13t -X gil=0 script.py
Simplify your generics with default types
Previously, you had to define multiple overloads if you wanted a default type. Type parameter defaults now simplify library design by letting a generic class fall back to a sane default type when none is provided.
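A minimal runnable sketch (the Box container is illustrative; the default= keyword needs Python 3.13, hence the guarded fallback for older interpreters):

```python
from typing import Generic, TypeVar

try:
    T = TypeVar("T", default=str)  # Python 3.13+
except TypeError:
    T = TypeVar("T")  # Older interpreters: no default support

class Box(Generic[T]):
    def __init__(self, item: T) -> None:
        self.item = item

# With the default, type checkers read a bare `Box` as `Box[str]`,
# so callers no longer have to spell out the common case.
b: Box = Box("hello")
print(b.item)
```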
After
T = TypeVar("T", default=str)
Update your immutable objects with a standard API
Reconciles the fragmented APIs across dataclasses, namedtuple, and custom objects into one standard interface for immutable object modification.
Before
from dataclasses import replace
# For dataclasses, you used `replace`. For namedtuples, you used `_replace`.
new_obj = replace(obj, status="done")After
import copy
# A single standard API for copying and replacing fields.
new_obj = copy.replace(obj, status="done")
Python 3.14 — Smarter Memory and Safer Strings
Official Release Notes
Launching on October 7, 2025, this architectural leap introduces template strings to safely handle raw data injections and smooths out application performance with Incremental Garbage Collection.
Stop quoting your classes and use deferred annotations
Python now defers the evaluation of type hints by default. This fixes “circular reference” issues and speeds up module imports.
Before
# You had to use strings if a class referenced itself in its own methods.
class Node:
    def __init__(self, next: "Node"): ...
After
class Node:
    def __init__(self, next: Node): ...  # No strings needed
Handle raw data safely with t-strings
f-strings immediately turn everything into a string. This can be dangerous for SQL or HTML if not handled carefully.
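To make the danger concrete, here is what eager f-string interpolation does with a hostile value (the query and value are illustrative):

```python
# Attacker-controlled input sneaks SQL into the query text.
user_id = "1 OR 1=1"
query = f"SELECT * FROM users WHERE id = {user_id}"
print(query)
# The driver receives one opaque string; it can no longer
# distinguish the trusted SQL from the injected condition.
```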
Before
query = f"SELECT * FROM users WHERE id = {user_id}"
t-strings (Template strings) return Template objects that allow library authors (like SQLAlchemy or Jinja) to receive the raw template and the variables separately. This lets downstream libraries process interpolations safely to prevent injection risks.
After
query = t"SELECT * FROM users WHERE id = {user_id}"
# query is a Template object, not a string.Compress with Meta-level speed using Zstandard
Python 3.14 added a new unified compression package. While old modules like gzip still exist and are not deprecated for at least five years, the new package provides a more consistent API. Most importantly, it added native support for Zstandard, the ultra-fast modern compression algorithm created by Meta.
from compression import zstd
compressed = zstd.compress(b"Hello World" * 100)
original = zstd.decompress(compressed)  # Round-trips back to the original bytes
No more relying on third-party bindings for one of the most important compression formats on the web.
Clean up your multi-exception catch blocks
If you wanted to catch multiple exceptions, you had to wrap them in a tuple. In Python 3.14, you can finally drop the parentheses if you aren’t using the as keyword.
Before
try:
    connect()
except (TimeoutError, ConnectionRefusedError):
    print("Network is down!")
After
try:
    connect()
except TimeoutError, ConnectionRefusedError:
    print("Network is down!")
Wait, does this look like Python 2? Yes! Python 2 used commas to bind variables (except Exception, e), which was confusing. Python 3 fixed that with as. Now that as is strictly enforced for variable binding, the comma is safely returned to its rightful job: separating a list of types.
Smooth out your stutters with incremental GC
Python’s Garbage Collector (GC) used to run in a “stop-the-world” fashion. When it collected cyclic memory, your entire application would pause. For web servers or video games, this caused noticeable micro-stutters. Python 3.14 introduces Incremental GC, which breaks the collection process into tiny steps, drastically reducing pause times and keeping high-performance apps smooth.
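You can observe the collector’s pauses yourself through the long-standing gc.callbacks hook; a sketch (the cycle-creating loop is just synthetic garbage):

```python
import gc
import time

pauses = []

def on_gc(phase, info):
    # gc invokes this hook at the start and stop of every collection pass.
    if phase == "start":
        pauses.append(time.perf_counter())
    elif phase == "stop":
        pauses[-1] = time.perf_counter() - pauses[-1]

gc.callbacks.append(on_gc)

# Manufacture reference cycles for the collector to chew on.
for _ in range(1_000):
    a, b = [], []
    a.append(b)
    b.append(a)

gc.collect()
gc.callbacks.remove(on_gc)
print(f"collections observed: {len(pauses)}, longest pause: {max(pauses):.6f}s")
```

On 3.14’s incremental collector, the individual pauses you measure this way stay short even as the heap grows.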
Python 3.15 — The Ultimate Efficiency Tuning (Prerelease/Draft)
Note: This section is based on current drafts and prerelease proposals. The final version is targeted for October 1, 2026, so this is not yet settled history and details may change.
Slated for late 2026, this forward-looking release promises to drastically speed up application boot times with lazy imports and overhaul performance tracking with the new Tachyon profiler.
Speed up your startup with lazy imports
Before
# Every import at the top runs immediately, slowing down startup.
import heavy_library
After
# Module is only loaded when you actually use it.
lazy import heavy_library
Essential for CLI tools and frameworks where flexibility is critical. No longer will we have to import inside conditions. Note that explicit lazy imports have specific usage restrictions to ensure compatibility.
Lock your mappings down with frozendict
Python 3.15 introduces a new built-in frozendict type. This provides a standard, hashable and immutable mapping type, perfect for configuration and as keys in other dictionaries. Before, you had to use MappingProxyType or third-party libraries.
After
settings = frozendict({"id": "123"})
Flatten your lists in a single comprehension
Flattening nested lists or combining multiple generators has always required either itertools.chain() or writing a confusing double-loop list comprehension ([x for sublist in mainlist for x in sublist]). Python 3.15 introduces the * and ** unpacking operators directly inside comprehensions.
Before
lists = [[1, 2], [3, 4], [5]]
# Was it "for x in L for L in lists" or the other way around?
flattened = [x for L in lists for x in L]
# Or importing a tool:
import itertools
flattened = list(itertools.chain.from_iterable(lists))After
lists = [[1, 2], [3, 4], [5]]
flattened = [*L for L in lists]  # [1, 2, 3, 4, 5]
This single feature saves developers from the most common Stack Overflow search in Python history: “how do I flatten a list of lists?” It even works with dictionaries: {**d for d in dicts}!
Profile your production code with Tachyon
Python’s standard profilers (cProfile and profile) use “deterministic tracing,” which means they record every single function call. This is accurate, but adds massive overhead to your code, often slowing it down so much that the profile becomes inaccurate for production debugging. Python 3.15 introduces a dedicated profiling package alongside a new built-in statistical sampling profiler named Tachyon.
Before
python -m cProfile script.py
# Slows down the script significantly, skewing real-time performance metrics.
After
python -m profiling.sampling run script.py
# Samples the call stack at high frequency, giving accurate metrics with extremely low overhead.
Keep your math pure with the integer module
As integer mathematics becomes more important for cryptography and large-scale data, the generic math module (which focuses on floating point) needed a sibling. Python 3.15 introduces math.integer for pure integer mathematical operations.
Looking Back to Look Forward
Python really has come a long way and this was only a non-exhaustive list of the major features of Python 3. I hope you found some interesting features that you didn’t know about because I sure did! If you’re still on an older version, I have only one piece of advice: upgrade. The water is fine, the code is prettier, and it’s only getting better (if you ignore the Walrus).
- Python Release History (2026): From 3.0 to 3.15 What Changed? - 15 March 2026