CS188 Notes 1 - Constraint Satisfaction Problems (CSPs)
Notes from UC Berkeley's CS188 course on Artificial Intelligence.
Note:#
This is a work in progress. I will be adding more notes and examples as I go through the course. The course is available on the Berkeley website.
Note that my notes are based on the Spring 2025 version of the course and on my understanding of the material, so they MAY NOT be 100% accurate or complete. Also, THIS IS NOT A SUBSTITUTE FOR THE COURSE MATERIAL. I only take notes on the parts of a lecture that I find interesting or confusing, NOT on every single detail.
I will begin my notes with Lec.4 (CSPs I) and continue from there.
CS188: Lecture 4 - Constraint Satisfaction Problems (CSPs)#
The goal is to find a complete assignment (every variable has a value from its domain) such that all constraints are satisfied. CSPs are a special kind of search problem where the path to the goal doesn’t matter, only the final state.
Backtracking Search#
The fundamental algorithm for solving CSPs systematically is Backtracking Search. It works as follows:
- Start with an empty assignment.
- Select an unassigned variable.
- Try assigning a value from its domain.
- Check if this assignment violates any constraints with already assigned variables.
- If no violation, recursively call backtracking for the next variable. If the recursive call succeeds, we’re done (or continue if finding all solutions).
- If violation, or if the recursive call returns failure, try the next value for the current variable.
- If all values for the current variable have been tried and failed, backtrack: return failure to the previous call, forcing it to try a different value.
This explores the space of partial assignments in a depth-first manner. While complete (guaranteed to find a solution if one exists), basic backtracking can be very slow.
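The steps above can be sketched in Python. The CSP representation here is a hypothetical one of my own (not from the course): `domains` maps each variable to a list of candidate values, and `constraints` maps an ordered pair of variables to a predicate that returns `True` when the two values are compatible.

```python
# Minimal backtracking-search sketch for a binary CSP.
# Assumed (made-up) representation:
#   domains:     {var: [value, ...]}
#   constraints: {(var_a, var_b): predicate(val_a, val_b) -> bool}

def consistent(var, val, assignment, constraints):
    """Check var=val against every already-assigned neighbor."""
    for (a, b), ok in constraints.items():
        if a == var and b in assignment and not ok(val, assignment[b]):
            return False
        if b == var and a in assignment and not ok(assignment[a], val):
            return False
    return True

def backtrack(assignment, domains, constraints):
    if len(assignment) == len(domains):        # complete assignment: done
        return assignment
    var = next(v for v in domains if v not in assignment)
    for val in domains[var]:
        if consistent(var, val, assignment, constraints):
            assignment[var] = val
            result = backtrack(assignment, domains, constraints)
            if result is not None:
                return result
            del assignment[var]                # undo, try the next value
    return None                                # all values failed: backtrack
```

For example, two variables that must differ, where `B` only has one color, force `A` away from `R`: `backtrack({}, {"A": ["R", "G"], "B": ["R"]}, {("A", "B"): lambda x, y: x != y})` finds `{"A": "G", "B": "R"}`.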
Filtering (Constraint Propagation)#
Filtering techniques aim to prune the search space before or during backtracking by removing values from domains that cannot possibly lead to a solution.
- Forward Checking: When a variable `X` is assigned a value `v`, look at all unassigned neighboring variables `Y` connected to `X` by a constraint. Remove any value `y` from `Y`'s domain that is inconsistent with `X = v`.
  - Why: Simple, relatively cheap check that prevents immediate failures down the line.
  - Limitation: It only checks constraints between the newly assigned variable and its future neighbors. It doesn't detect inconsistencies between two unassigned variables, even if their domains have been reduced (e.g., if both NT and SA are reduced to only {Blue}, Forward Checking won't notice the NT-SA conflict until one of them is assigned).
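A minimal forward-checking sketch, using the same made-up representation as before (`domains` as variable → value list, `constraints` as variable-pair → compatibility predicate; both are my own illustration, not the course's code):

```python
def forward_check(var, val, domains, assignment, constraints):
    """After assigning var=val, prune values inconsistent with it from the
    domains of unassigned neighbors. Returns the list of pruned (var, value)
    pairs so they can be restored on backtrack, or None if a domain empties."""
    pruned = []
    for (a, b), ok in constraints.items():
        if a == var and b not in assignment:
            neighbor, check = b, lambda y: ok(val, y)
        elif b == var and a not in assignment:
            neighbor, check = a, lambda y: ok(y, val)
        else:
            continue                      # constraint doesn't involve var
        for y in list(domains[neighbor]):
            if not check(y):
                domains[neighbor].remove(y)
                pruned.append((neighbor, y))
        if not domains[neighbor]:
            return None                   # dead end detected immediately
    return pruned
```

For instance, assigning `NT = Red` with the constraint `NT != SA` prunes `Red` from SA's domain, leaving only `Blue`.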
- Arc Consistency (2-Consistency): An arc `X -> Y` is consistent if for every value `x` remaining in `X`'s domain, there exists at least one value `y` remaining in `Y`'s domain such that `(x, y)` satisfies the constraint between `X` and `Y`. You could think of it as a "two-way" check, an upgrade over the Forward Checking mentioned above.
  - How (AC-3 Algorithm Idea): Maintain a queue of all arcs. While the queue is not empty, pop an arc `X -> Y` and check whether it is consistent. If not, remove the inconsistent value(s) `x` from `X`'s domain ("delete from the tail"). Crucially: if any value was removed from `X`, add all arcs `Z -> X` (where `Z` is a neighbor of `X` other than `Y`) back into the queue, because the removal might make some values in `Z` inconsistent. Repeat until the queue is empty (no more values can be removed).
  - Why: More powerful than Forward Checking. It propagates constraints between variables, potentially detecting failures much earlier (like the NT-SA {Blue} conflict). It can be used as preprocessing or maintained during search. It is more computationally expensive than Forward Checking, but often worth it.
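The AC-3 idea above can be sketched as follows, again on my own hypothetical representation (`domains` as variable → value list, `constraints` as variable-pair → predicate); this is an illustration, not the course's reference implementation:

```python
from collections import deque

def revise(domains, constraints, x, y):
    """Delete from the tail: remove values of x with no support in y."""
    ok = constraints.get((x, y))
    if ok is None:                         # constraint stored the other way
        flipped = constraints[(y, x)]
        ok = lambda vx, vy: flipped(vy, vx)
    removed = False
    for vx in list(domains[x]):
        if not any(ok(vx, vy) for vy in domains[y]):
            domains[x].remove(vx)
            removed = True
    return removed

def ac3(domains, constraints):
    """Enforce arc consistency; returns False if some domain empties."""
    arcs = deque()
    neighbors = {v: set() for v in domains}
    for (a, b) in constraints:
        arcs.extend([(a, b), (b, a)])      # both directions of every arc
        neighbors[a].add(b)
        neighbors[b].add(a)
    while arcs:
        x, y = arcs.popleft()
        if revise(domains, constraints, x, y):
            if not domains[x]:
                return False               # failure detected early
            for z in neighbors[x] - {y}:
                arcs.append((z, x))        # pruning x may break Z -> X arcs
    return True
```

Running this on the NT-SA example with both domains reduced to `{Blue}` and the constraint `NT != SA` returns `False` immediately, the failure Forward Checking alone would miss.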
- K-Consistency & Strong K-Consistency: Generalizes consistency checks to `k` variables. K-Consistency means any consistent assignment to `k-1` variables can be extended to a `k`-th variable. 1-Consistency = Node Consistency (unary constraints). 2-Consistency = Arc Consistency. 3-Consistency = Path Consistency.
  - Strong K-Consistency: The CSP is J-Consistent for all `J` from 1 to `K`.
  - Fact: Strong n-Consistency (where n is the number of variables) guarantees a solution can be found without backtracking.
  - My misunderstanding: Why "Strong"? Because the backtrack-free construction process requires the guarantee at every step `k`. Step `k` requires k-Consistency, assuming the first `k-1` assignments were consistent. Plain n-Consistency only guarantees the last step (n-1 to n) works, but doesn't guarantee the intermediate steps (like 2 to 3) are possible if the problem isn't also 3-Consistent, etc. A problem could be n-Consistent (vacuously, if no consistent n-1 assignments exist) but fail lower levels of consistency, requiring backtracking or even having no solution. Strong n-Consistency ensures all necessary intermediate guarantees hold.
Speeding Up Backtracking#
These heuristics don’t prune the search space but guide the backtracking search to potentially find solutions faster or detect failures earlier.
- Variable Ordering: Minimum Remaining Values (MRV): Choose the next unassigned variable that has the fewest legal values left in its domain.
  - Why ("Fail-Fast"): If a variable has 0 values, failure is detected immediately. If it has 1 value, it's forced, simplifying the problem. Variables with few values are often bottlenecks; dealing with them early is likely to prune large parts of the search tree quickly if they lead to failure. Also called "most constrained variable".
- Value Ordering: Least Constraining Value (LCV): Once a variable is selected (e.g., by MRV), try its values in order of how little they constrain the rest of the problem: choose the value that rules out the fewest values in the domains of neighboring unassigned variables.
  - Why ("Succeed-First"): Tries to keep options open for the future, increasing the chance that the current path leads to a solution without immediate backtracking. It prioritizes choices that seem less likely to cause conflicts later.
MRV and LCV often work very well together.
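The two heuristics can be sketched as small helper functions, reusing the same hypothetical representation as earlier (`domains` as variable → value list, `constraints` as variable-pair → predicate; names are mine, not the course's):

```python
def select_mrv_variable(domains, assignment):
    """MRV: pick the unassigned variable with the fewest legal values left."""
    unassigned = [v for v in domains if v not in assignment]
    return min(unassigned, key=lambda v: len(domains[v]))

def order_lcv_values(var, domains, constraints, assignment):
    """LCV: order var's values so those ruling out the fewest neighbor
    values come first."""
    def conflicts(val):
        count = 0
        for (a, b), ok in constraints.items():
            if a == var and b not in assignment:
                count += sum(1 for y in domains[b] if not ok(val, y))
            elif b == var and a not in assignment:
                count += sum(1 for y in domains[a] if not ok(y, val))
        return count
    return sorted(domains[var], key=conflicts)
```

In a backtracking loop, these replace the naive "next unassigned variable" and "domain order" choices: MRV decides *which* variable to branch on, LCV decides *in what order* to try its values.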