Computed Properties in Rust


Introduction

Computed properties dynamically calculate values when accessed instead of storing them. While languages like Swift and JavaScript support them natively, Rust requires explicit patterns. This guide covers five approaches to replicate computed properties in Rust, including thread-safe solutions for concurrent code.

In Swift, a computed property recalculates its value on access:

struct Rectangle {
    var width: Double
    var height: Double

    var area: Double { // Computed property
        width * height
    }
}

let rect = Rectangle(width: 10, height: 20)
print(rect.area) // 200.0

Rust doesn't support this syntax, but we can achieve similar results with methods and caching strategies.

Using Getter Methods (No Caching)

📌 Best for: Simple calculations or frequently changing values.

In Rust, the most straightforward way to emulate a "computed property" is to write a getter method that calculates the value on each call.

🦀 Rust Implementation

#[derive(Debug)]
struct Rectangle {
    width: f64,
    height: f64,
}

impl Rectangle {
    fn area(&self) -> f64 {
        self.width * self.height
    }
}

fn main() {
    let rect = Rectangle { width: 10.0, height: 20.0 };
    println!("Area: {}", rect.area()); // 200
}

๐Ÿ‘ Pros:

  • Always up-to-date.
  • No dependencies.
  • Zero overhead for caching or locking.

👎 Cons:

  • Recomputed on every call (no caching).

Using Lazy Computation with OnceLock (Efficient Caching)

📌 Best for: Immutable data with expensive computations.

Rust's OnceLock lets you lazily compute a value exactly once. Through a shared reference, the value can never be reset or invalidated after it is written, which makes it a good fit for data that never changes.

🦀 Rust Implementation

use std::sync::{Arc, OnceLock};
use std::thread;

#[derive(Debug)]
struct Rectangle {
    width: f64,
    height: f64,
    cached_area: OnceLock<f64>,
}

impl Rectangle {
    fn new(width: f64, height: f64) -> Self {
        Self { width, height, cached_area: OnceLock::new() }
    }

    fn area(&self) -> f64 {
        *self.cached_area.get_or_init(|| {
            println!("Computing area...");
            self.width * self.height
        })
    }
}

fn main() {
    // Create the Rectangle in a single-threaded context.
    let mut rect = Rectangle::new(10.0, 20.0);

    // Compute area (first time, triggers computation).
    println!("First call: {}", rect.area()); // Computes and caches
    // Use cached value
    println!("Second call: {}", rect.area()); // Uses cached value

    // Modifying width does NOT invalidate the cache.
    rect.width = 30.0; // Has no effect on the cached area

    // Prove that area() is still the cached value.
    println!("After modifying width: {}", rect.area()); // Still 200, not 600

    // Move rect into an Arc when we need multi-threading.
    let rect = Arc::new(rect);

    // Proving Thread-Safety
    let rect_clone = Arc::clone(&rect);
    let handle = thread::spawn(move || {
        println!("Thread call: {}", rect_clone.area());
    });

    handle.join().unwrap();

    println!("Final call: {}", rect.area());
}

๐Ÿ–จ๏ธ Expected Output

Computing area...
First call: 200
Second call: 200
After modifying width: 200
Thread call: 200
Final call: 200

๐Ÿ‘ Pros:

  • Thread-safe: OnceLock itself synchronizes initialization; wrapping the struct in Arc lets you share it across threads.
  • Zero overhead after first initialization.

👎 Cons:

  • No invalidation through a shared reference: once set, the value stays.
  • Only suitable for immutable data (or values you never need to recompute).

Mutable Caching with RefCell

📌 Best for: Single-threaded mutable data, where the computed value can be invalidated or re-computed multiple times.

Rust's interior mutability pattern allows us to store a cache (such as an Option<f64>) behind an immutable reference. RefCell<T> enforces borrowing rules at runtime rather than compile time.

🦀 Rust Implementation

use std::cell::RefCell;
use std::sync::atomic::{AtomicUsize, Ordering};

static COMPUTE_COUNT: AtomicUsize = AtomicUsize::new(0);

#[derive(Debug)]
struct Rectangle {
    width: f64,
    height: f64,
    // Cache stored in RefCell for interior mutability
    cached_area: RefCell<Option<f64>>,
}

impl Rectangle {
    fn new(width: f64, height: f64) -> Self {
        Self { width, height, cached_area: RefCell::new(None) }
    }

    fn area(&self) -> f64 {
        let mut cache = self.cached_area.borrow_mut();
        match *cache {
            Some(area) => {
                println!("Returning cached area: {}", area);
                area
            }
            None => {
                println!("Computing area...");
                let area = self.width * self.height;
                // Debug-only counter: tracks how many times the area is actually computed.
                COMPUTE_COUNT.fetch_add(1, Ordering::SeqCst);
                *cache = Some(area);
                area
            }
        }
    }

    fn set_size(&mut self, width: f64, height: f64) {
        println!("Updating dimensions and clearing cache...");
        self.width = width;
        self.height = height;
        self.cached_area.replace(None); // Invalidate the cache
    }

    fn invalidate_cache(&self) {
        println!("Invalidating cache...");
        self.cached_area.replace(None);
    }
}

fn main() {
    let mut rect = Rectangle::new(10.0, 20.0);

    println!("First call: {}", rect.area()); // Computes
    println!("Second call: {}", rect.area()); // Cached

    rect.set_size(15.0, 25.0); // Mutates and invalidates cache
    println!("After resize: {}", rect.area()); // Recomputes

    rect.invalidate_cache(); // Manually invalidate cache
    println!("After cache invalidation: {}", rect.area()); // Recomputes

    println!("Times computed: {}", COMPUTE_COUNT.load(Ordering::SeqCst)); // Should be 3
}

๐Ÿ–จ๏ธ Expected Output

Computing area...
First call: 200
Returning cached area: 200
Second call: 200
Updating dimensions and clearing cache...
Computing area...
After resize: 375
Invalidating cache...
Computing area...
After cache invalidation: 375
Times computed: 3

๐Ÿ‘ Pros:

  • Handles mutable data.
  • Explicit invalidation available.

👎 Cons:

  • Not thread-safe.
  • Runtime borrow checks add overhead.

Thread-Safe Caching with Mutex

📌 Best for: Shared data across threads, when updates or caching need exclusive access.

For multi-threaded scenarios, we can wrap our cache in a Mutex<Option<f64>>. The Mutex enforces mutual exclusion, meaning only one thread can compute or update the cache at a time.

🦀 Rust Implementation

use std::sync::{Arc, Mutex};
use std::thread;

struct Rectangle {
    width: f64,
    height: f64,
    cached_area: Mutex<Option<f64>>,
}

impl Rectangle {
    fn new(width: f64, height: f64) -> Self {
        Self { width, height, cached_area: Mutex::new(None) }
    }

    fn area(&self) -> f64 {
        let mut cache = self.cached_area.lock().unwrap();
        match *cache {
            Some(area) => area,
            None => {
                println!("Computing area...");
                let area = self.width * self.height;
                *cache = Some(area);
                area
            }
        }
    }
}

fn main() {
    let rect = Arc::new(Rectangle::new(10.0, 20.0));
    let mut handles = vec![];

    // Spawn 4 threads
    for _ in 0..4 {
        let rect = Arc::clone(&rect);
        handles.push(thread::spawn(move || {
            println!("Area: {}", rect.area());
        }));
    }

    for handle in handles {
        handle.join().unwrap();
    }
}

๐Ÿ–จ๏ธ Expected Output

Computing area...
Area: 200
Area: 200
Area: 200
Area: 200

๐Ÿ‘ Pros:

  • Thread-safe.
  • Computes once across threads.

👎 Cons:

  • Locking overhead on every call: even cache hits must acquire the Mutex, and all threads block while one holds the lock.

Optimized Reads with RwLock

📌 Best for: Read-heavy workloads with rare writes (e.g., many threads reading the cached value concurrently).

RwLock allows multiple readers or one writer. This can reduce contention if reading is far more common than writing.

🦀 Rust Implementation

use std::sync::{Arc, RwLock};
use std::thread;
use std::time::Duration;

struct Rectangle {
    width: f64,
    height: f64,
    cached_area: RwLock<Option<f64>>,
}

impl Rectangle {
    fn new(width: f64, height: f64) -> Self {
        Self {
            width,
            height,
            cached_area: RwLock::new(None),
        }
    }

    fn area(&self, thread_id: usize) -> f64 {
        // Attempt fast read path
        {
            let cache = self.cached_area.read().unwrap();
            if let Some(area) = *cache {
                println!("[Thread {thread_id}] Read cached value: {area}");
                return area;
            }
        } // Explicitly drop read lock here

        // Slow write path: block all reads while computing
        println!("[Thread {thread_id}] Cache miss. Acquiring write lock...");
        let mut cache = self.cached_area.write().unwrap();

        // Another thread might have written while we waited for write lock
        if let Some(area) = *cache {
            println!("[Thread {thread_id}] Another thread cached: {area}");
            return area;
        }

        println!("[Thread {thread_id}] Computing area...");
        thread::sleep(Duration::from_secs(2)); // Simulate slow computation
        let area = self.width * self.height;
        *cache = Some(area);

        println!("[Thread {thread_id}] Cached area: {area}");
        area
    }
}

fn main() {
    let rect = Arc::new(Rectangle::new(10.0, 20.0));
    let mut handles = vec![];

    for i in 0..4 {
        let rect = Arc::clone(&rect);
        handles.push(thread::spawn(move || {
            println!("[Thread {i}] Started");
            let result = rect.area(i);
            println!("[Thread {i}] Computed area: {result}");
        }));
    }

    for handle in handles {
        handle.join().unwrap();
    }
}

๐Ÿ–จ๏ธ Expected Output

[Thread 1] Started
[Thread 1] Cache miss. Acquiring write lock...
[Thread 1] Computing area...
[Thread 2] Started
[Thread 3] Started
[Thread 0] Started
[Thread 1] Cached area: 200
[Thread 1] Computed area: 200
[Thread 3] Read cached value: 200
[Thread 3] Computed area: 200
[Thread 0] Read cached value: 200
[Thread 0] Computed area: 200
[Thread 2] Read cached value: 200
[Thread 2] Computed area: 200

Note: Since the threads are spawned in a loop (for i in 0..4), their exact execution order is non-deterministic. The exact interleaving of logs may vary.

๐Ÿ‘ Pros:

  • Concurrent reads after caching (better than Mutex in read-heavy scenarios).
  • Thread-safe.

👎 Cons:

  • A write lock blocks all readers during the cache-miss phase.

Comparison Table

Approach      | Use Case                          | Thread-Safe | Overhead | Invalidation
Getter Method | Simple, non-cached values         | ✅          | None     | Always recomputed
OnceLock      | Immutable, expensive computations | ✅          | Low      | Not possible (one-and-done)
RefCell       | Single-threaded mutable data      | ❌          | Moderate | Manual (replace(None))
Mutex         | Thread-safe, shared data          | ✅          | High     | Manual (lock & reset Option)
RwLock        | Read-heavy concurrent access      | ✅          | High     | Manual (write lock & reset)

Final Thoughts

Rust might not have Swift-like computed properties built into the language syntax, but it more than compensates with low-level control and flexible lazy/cached patterns. Whether you pick a simple method, an interior-mutability cache, or a multi-threading-friendly lock-based approach, Rust gives you a safe, explicit way to manage when and how expensive computations run.

  • Getter methods for no caching.
  • OnceLock (or OnceCell) for one-time lazy initialization on immutable data.
  • RefCell for single-threaded mutable caching with manual invalidation.
  • Mutex / RwLock for multi-threaded caching, balancing read concurrency and write locking.

Choose the pattern that aligns with your data's mutability, concurrency, and performance needs. Rust's explicit nature means you're always in control of exactly when and how a property is computed, updated, or shared across threads.