Via Luca Aceto: Erik Demaine, this year's Presburger Award winner, gives an inspiring talk about how and why he does his research, ranging from computer games to theoretical computer science to paperfolding art and back again to programmable matter. It's about 36 minutes long, but definitely worth the time.
A nice video, thank you for all your posts here.
I would like to ask a simple question; sorry, it is out of context.
What is the model of computation implicitly assumed in the TCS literature? For example, in the article by D. Eppstein, "Finding the k shortest paths"?
Is it the random-access machine? (The Wikipedia page is not really clear on this.)
Your first mistake is the word "the". For computational complexity theory, I think the gold standard is still the Turing machine, but a large part of that subject is making up different models and comparing their power. For algorithm design, the RAM is a good first approximation, but details vary, and the pointer machine, real RAM, or word RAM might be more appropriate in some cases. In any case, people in this area don't often pay a lot of attention to the model, just as many mathematicians who are not set theorists don't spend a lot of time thinking about the underlying set-theoretic axiom system they're using. There are also more specialized models for algorithms that incorporate memory hierarchy costs or parallelism. For lower bounds on algorithmic problems, comparison trees are common, but the cell probe model (when it can be used) leads to more powerful bounds: not bigger in terms of asymptotic growth, but harder to work around with a more powerful type of algorithm.
For your specific question about one of my papers, I'm pretty sure I was thinking about it as a RAM algorithm, but I didn't end up using anything that couldn't be done on a pointer machine. Some of my later algorithms use bit manipulations that make the word RAM a better model.
Excellent answer - thanks very much.