
Overview
Python is the backbone of modern analytics and machine learning. In this track we build practical fluency: pulling data via APIs, exploring and communicating results with visualisation libraries, structuring problems with sound feature and modelling thinking, training and evaluating ML models, and taking first steps toward deployment (scripts, APIs, or hosted services). When your goals include language or generative workflows, we add NLP and AI topics at the right depth, from classical text features to contemporary toolchains.
Who this is for
Career switchers, analysts adding code to their toolkit, students with dissertation or capstone needs, and developers who need statistics and ML explained in a grounded, project-first way.
Curriculum
A practical path through data, models, and deployment
Whether you are wrangling CSVs, training models, or calling AI APIs, the aim is code you can read, reuse, and explain months later—grounded in projects that match your goals.
- Python built for real analysis work. Reliable environments on your machine, loading and cleaning tabular data, working with numerical arrays efficiently, and organising notebooks or scripts so teammates can read and reuse your work.
- Pulling data from APIs. Using HTTP clients, parsing JSON, light scheduling patterns, and stitching API responses into a coherent analysis pipeline.
- Visualisation that supports the story. Exploratory charts for understanding the data and a small set of polished figures you would happily put in a slide deck or report.
- Modelling discipline. Train and validation splits, metrics that match the business question, and the common leakage mistakes that quietly invalidate results—plus how to avoid them.
- Machine learning workflows. End-to-end flows with a mainstream Python toolkit, a practical tour of algorithms, hyperparameters explained in plain terms, and the basics of interpreting what your model is doing.
- From notebook toward production. Packaging a model, serving predictions behind a small HTTP API when appropriate, and what engineering teams typically ask for before a model is considered live.
- NLP and AI where they add value. Text preprocessing, a clear-eyed view of embeddings, and choosing modern cloud APIs or libraries only when they genuinely fit your project.
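The API-to-pipeline step above can be sketched with the standard library alone. The payload shape and field names below are illustrative assumptions, not any particular API's contract; in a real project each page would come from an HTTP client such as requests.

```python
import json

def records_from_pages(pages):
    """Flatten decoded JSON pages into one list of row dicts.

    Assumes each page carries its records under a "results" key --
    an illustrative shape, not a real API's schema.
    """
    rows = []
    for page in pages:
        for item in page.get("results", []):
            # Keep only the fields the analysis needs, with safe defaults.
            rows.append({
                "id": item.get("id"),
                "value": item.get("value", 0.0),
            })
    return rows

# In practice each page would come from e.g. requests.get(url).json();
# here we decode canned JSON strings to keep the sketch self-contained.
raw_pages = [
    '{"results": [{"id": 1, "value": 3.5}, {"id": 2}]}',
    '{"results": [{"id": 3, "value": 1.25}]}',
]
rows = records_from_pages(json.loads(p) for p in raw_pages)
```

The same flattening step works whether the pages arrive from pagination, scheduled pulls, or several endpoints stitched together.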
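The split-and-leakage discipline can be illustrated without any ML library: the scaler below is fitted on the training rows only and then applied to both splits, so no validation information leaks into preprocessing. The function names are ours, not from a specific toolkit.

```python
import random

def train_val_split(rows, val_fraction=0.2, seed=42):
    """Shuffle reproducibly, then split rows into train and validation."""
    rows = rows[:]
    random.Random(seed).shuffle(rows)
    n_val = int(len(rows) * val_fraction)
    return rows[n_val:], rows[:n_val]

def fit_scaler(train_values):
    """Compute mean and std from the TRAINING data only. Fitting on the
    full dataset before splitting is a classic leakage mistake."""
    mean = sum(train_values) / len(train_values)
    var = sum((v - mean) ** 2 for v in train_values) / len(train_values)
    std = var ** 0.5
    return mean, std if std > 0 else 1.0

def scale(values, mean, std):
    return [(v - mean) / std for v in values]

data = list(range(100))
train, val = train_val_split(data)
mean, std = fit_scaler(train)      # statistics from the train split only
train_scaled = scale(train, mean, std)
val_scaled = scale(val, mean, std)  # validation reuses train statistics
```

Library scalers follow the same pattern: fit on train, transform both splits.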
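Serving predictions behind a small HTTP API can be sketched with the standard library's http.server. Real deployments typically use a framework such as FastAPI or Flask, and the "model" here is a stand-in linear rule, not a trained artefact.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def predict(features):
    """Stand-in model: a hard-coded linear rule. In practice you would
    load a trained, serialised model at startup instead."""
    return 2.0 * features.get("x", 0.0) + 1.0

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON body, run the model, return the prediction as JSON.
        length = int(self.headers.get("Content-Length", 0))
        features = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps({"prediction": predict(features)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

# To serve locally (blocks until interrupted):
# HTTPServer(("127.0.0.1", 8000), PredictHandler).serve_forever()
```

Keeping `predict` separate from the handler makes the model logic testable without starting a server, which is usually the first thing engineering teams ask about.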
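The text-preprocessing step can likewise start very small. This is a deliberately naive sketch: real pipelines add stop-word handling, stemming or lemmatisation, or a library tokeniser as the project demands.

```python
import re

def preprocess(text):
    """Minimal text cleaning: lowercase, replace punctuation with
    spaces, split on whitespace."""
    text = re.sub(r"[^\w\s]", " ", text.lower())
    return text.split()

tokens = preprocess("Clean text first: lowercase, strip punctuation, then tokenise!")
```

Even this crude tokenisation is enough to build bag-of-words features; embeddings and modern APIs come in only when they genuinely fit the project.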
Outcomes you can aim for
You should be able to take a messy dataset from ingest to a documented notebook or script, train a baseline model with honest evaluation, and articulate limitations and next steps — plus a roadmap if you want to go deeper into deep learning or MLOps later.
How sessions work
Sessions are project-driven whenever possible: your dataset or a close substitute, with homework-sized tasks between meetings so skills compound.