Spring 2026
This course examines the core methodologies and applications of computational imaging with an emphasis on solving inverse problems such as image restoration, filtering, single‑pixel imaging, neural rendering and light‑field imaging. Representative case studies—including denoising and deconvolution—span a broad spectrum of approaches from classical image processing algorithms to modern data‑driven methods based on generative models. Students will learn how to combine optimization theory with deep learning through proximal‑gradient techniques such as half‑quadratic splitting (HQS) and the alternating direction method of multipliers (ADMM).
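To give a flavor of the optimization-plus-learning theme, below is a minimal sketch of half-quadratic splitting (HQS) for a toy 1D deconvolution problem with a sparsity prior. This is an illustrative example, not course material: the problem setup (circulant blur matrix, L1 prior, fixed penalty weight) and all names are chosen for this sketch.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (elementwise shrinkage).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def hqs_deconvolve(A, b, lam=0.1, rho=1.0, iters=100):
    """HQS for min_x 0.5*||Ax - b||^2 + lam*||x||_1.

    Alternates a quadratic data-fidelity update in x with the
    proximal (soft-thresholding) update of the L1 prior in z.
    """
    n = A.shape[1]
    z = np.zeros(n)
    M = A.T @ A + rho * np.eye(n)  # normal equations for the x-update
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(M, Atb + rho * z)  # data-fidelity step
        z = soft_threshold(x, lam / rho)       # prior (prox) step
    return z

# Toy problem: blur a sparse spike train and recover it.
rng = np.random.default_rng(0)
n = 64
kernel = np.array([0.25, 0.5, 0.25])
A = np.zeros((n, n))  # explicit circulant blur matrix, for clarity
for i in range(n):
    for j, k in enumerate(kernel):
        A[i, (i + j - 1) % n] = k
x_true = np.zeros(n)
x_true[[10, 30, 50]] = [1.0, -1.5, 2.0]
b = A @ x_true + 0.01 * rng.standard_normal(n)
x_hat = hqs_deconvolve(A, b, lam=0.02, rho=1.0, iters=200)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

In the assignments, the same splitting structure reappears with learned denoisers replacing the simple soft-thresholding prox step.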
Course assignments include Python‑based programming and image processing implementations. These hands‑on exercises develop practical skills for applying theoretical concepts to real‑world computational imaging problems. A term project provides an opportunity to explore a topic of interest in greater depth.
Topics include image formation, digital photography, neural networks for inverse problems, diffusion models, light fields, light transport, time-of-flight imaging, wave optics, deep optics, and holography (see the Schedule below).
Lectures: Tuesdays and Thursdays, 3:30–4:45 pm in Rm 209, Bldg 302.
Instructor office hours: Tuesdays, 4:45–5:30 pm in Rm 321, Bldg 302, starting March 3. Come to discuss projects, lecture material, and more.
TA office hours: Thursdays, 5:00–6:00 pm in Rm 416, Bldg 301, starting March 12. Come with questions about homework, labs, and lectures.
Textbook: There is no textbook for the course; links to readings and course notes are provided in the Schedule.
Contact: Course announcements and general information will be posted on the course forum on eTL. Please make a private eTL post if you need to contact the instructors directly.
The table below outlines the weekly topics and key milestones.
| Wk | Date | Event | Description | Files | Readings |
|---|---|---|---|---|---|
| 1 | 3/3 | Lecture 1 | Introduction and fast forward: overview of class, logistics, discussion of project ideas | | |
| | 3/5 | Lecture 2 | The human visual system: perception of color, depth, contrast, resolution, and more | | |
| 2 | 3/10 | Lecture 3 | Math review: sampling, optimization, deconvolution, and related topics | | |
| | 3/12 | Lecture 4 | Image formation: natural and linear perspective, pinholes and lenses | | |
| | 3/13 | HW 1 out | | | |
| 3 | 3/17 | Lecture 5 | Digital photography I: ray optics, aperture, depth of field, exposure, sensor noise | | |
| | 3/19 | Lecture 6 | Digital photography II: camera ISPs, demosaicking, denoising, deconvolution | | |
| | 3/20 | HW 1 due | Homework 1 due at 11:59 pm | | |
| 4 | 3/24 | Lecture 7 | Great ideas in computational photography: HDR, tone mapping, coded apertures, flutter shutter | | |
| | 3/26 | Lecture 8 | Introduction to neural networks: MLPs, CNNs, ResNets, denoising with CNNs | | |
| | 3/27 | HW 2 out | | | |
| 5 | 3/31 | Lecture 9 | Solving inverse problems with neural networks: UNet, deconvolution with CNNs, and more | | |
| | 4/2 | Lecture 10 | Image deconvolution with HQS: natural image priors and half‑quadratic splitting | | |
| | 4/3 | HW 2 due | Homework 2 due at 11:59 pm | | |
| 6 | 4/7 | Lecture 11 | Survey on final project topics | | |
| | 4/9 | Lecture 12 | Solving regularized inverse problems with ADMM: single‑pixel imaging and general inverse problems | | |
| | 4/10 | HW 3 out | | | |
| 7 | 4/14 | Lecture 13 | Introduction to diffusion models: score‑based generative modelling and image generation | | |
| | 4/16 | Lecture 14 | Solving inverse problems with diffusion model‑based priors | | |
| | 4/17 | Project proposal & HW 3 due | Project proposal and Homework 3 due at 11:59 pm | | |
| 8 | 4/21 | Lecture 15 | Focal stacks and depth from (de)focus: point spread function, depth from focus/defocus | | |
| | 4/23 | Lecture 16 | Light field imaging: plenoptic function, light field cameras, 3D displays | | |
| | 4/24 | HW 4 out | | | |
| 9 | 4/28 | Lecture 17 | Light transport I: scene reflectance and photometry, shape from intensity, applications in graphics | | |
| | 4/30 | Lecture 18 | Light transport II: forward and inverse light transport, light transport matrix | | |
| | 5/1 | HW 4 due | Homework 4 due at 11:59 pm | | |
| 10 | 5/5 | No class | Children’s Day (어린이날) | | |
| | 5/7 | Lecture 19 | Time‑of‑flight imaging: lidar, non‑line‑of‑sight imaging, ultrafast imaging | | |
| 11 | 5/12 | Midterm | In‑class exam | | |
| | 5/14 | Lecture 20 | Differentiable physics: differentiable optics, simulators, rendering, surrogate gradients | | |
| 12 | 5/19 | Lecture 21 | Neural representations: data structures, scene representations, Neural Radiance Fields, 3D Gaussian splatting | | |
| | 5/21 | Lecture 22 | Introduction to wave optics and nano optics: free‑space propagation, diffractive optical elements, metasurfaces, diffraction limit, lensless imaging | | |
| 13 | 5/26 | Lecture 23 | Deep optics: end‑to‑end optimization of optics and image processing using AI | | |
| | 5/28 | Lecture 24 | Phase retrieval and holography: phase retrieval, Fourier ptychography, computer‑generated holography | | |
| 14 | 6/2 | Final project | Work on final project | | |
| | 6/4 | Final project | | | |
| 15 | 6/8 | Final project due | | | |
| | 6/9 | Final project presentations | | | |
| | 6/11 | Final project presentations | | | |
All assignments and the final project proposal and report should be submitted on eTL. If you work as a team, make sure to indicate your team members in the submission.
There will be 4 homework assignments in this class (see Schedule). These assignments contain theoretical questions as well as implementations of techniques discussed in class. Please refer to the assignment writeups (available in the Files column of the Schedule) for details. After you finish, submit your code and report on eTL.
Collaboration Policy: Students may work together on homework assignments in small groups and discuss the problems, but each student must compose their solutions individually. Students must not view or copy the code of other students or solutions available on the internet. Names of collaborators must be listed on submitted assignments. Submitting plagiarized solutions is an academic offense and can carry severe penalties.
AI Assistants Policy: Generative AI may be used as a supplementary learning tool in this course. Students may use generative AI to explore ideas or topics needed to complete assignments and to analyze data. However, if generative AI is used to complete an assignment, the name of the tool used, the process of its use, and the generated content must be clearly stated in the assignment.
There will be an in-class midterm (80 minutes) on May 12.
The final project grade takes into account your poster presentation (organization of poster, clarity of presentation, ability to answer questions), your source code submission (code organization and documentation), and your final project report (appropriate format and length, abstract, introduction, related work, description of your method, quantitative and qualitative evaluation of your method, results, discussion & conclusion, bibliography).
You can work in teams of up to 2 students for the project. Submit only one proposal and one final report per team. The expected amount of work scales with the number of team members, so if two teams work on a similar project, we'd expect less work from the smaller team. Before you start on the proposal or the report, take a look at some of the past project proposals and reports to get a sense of what's expected (see link at the bottom of this page).
The project proposal is a 1–2 page document that should contain the following elements: a clear motivation of your idea; a discussion of related work, along with at least 3 scientific references (i.e., scientific papers, not blog articles or websites); an overview of what exactly your project is about and what the final goals are; and milestones for your team with a timeline and intermediate goals. Once you send us your proposal, we may ask you to revise it.
The final project report should look like a short (~6 pages) conference paper. We expect the following sections, which are standard practice for conference papers: abstract, introduction, related work, theory (i.e., your approach), analysis and evaluation, results, discussion and conclusion, references. To make your life easier, we provide a LaTeX template that you can use to get started on your report (see schedule for link).
This course is adapted from the Computational Imaging course designed by Gordon Wetzstein and offered at Stanford University (EE367) and University of Toronto (CSC2529, by David Lindell). Below you can find links to pinhole camera photos and course projects from these previous iterations of the course.
Links to notable pinhole camera photos:
Some of the materials used in class build on those of other instructors, including Yannis Gkioulekas, Marc Levoy, Fredo Durand, Ramesh Raskar, Shree Nayar, Paul Debevec, Matthew O'Toole, David Lindell, and others, as noted in the slides. Feel free to use these slides for academic or research purposes, but please maintain all acknowledgments. This webpage is based on the websites for CS231N and EE367 at Stanford University.
The course banner is taken from here (Shutterstock/denniro).