(Second post today. I don’t know why but I feel guilty like I’m spamming everyone.)
So this brief post is actually my attempt to ask whether something I was playing around with last night has been done before. It’s silly, but I got bored while we were hate-watching a movie last night and started writing down formation/typing rules for a lambda calculus that has probabilities attached to its typing judgments, say $\Gamma \vdash_p e : \tau$, where the intended interpretation is that when evaluating $e$ you have probability $p$ of success in the evaluation. The idea was to have a typed lambda calculus with a built-in notion that a computation can fail for an unknown reason: running out of memory, hardware failure, etc. I was thinking about it because I once saw a colloquium (I don’t remember the speaker, sorry) arguing that we don’t sufficiently account for the underlying reliability of hardware when designing our algorithms.
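To give a flavor of what I mean (this is just a sketch of the sort of rule I was scribbling, with made-up notation: $\Gamma \vdash_p e : \tau$ reads "under $\Gamma$, $e$ has type $\tau$ and evaluates successfully with probability at least $p$"), an application rule might multiply the probabilities of its premises, assuming failures are independent:

$$\frac{\Gamma \vdash_p e_1 : \sigma \to \tau \qquad \Gamma \vdash_q e_2 : \sigma}{\Gamma \vdash_{p \cdot q} \; e_1\, e_2 : \tau}$$

Whether independence is the right assumption is exactly the kind of thing I don’t know how to settle, which is part of why I’m asking.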
So has that been done before? I found
but these seem to be a bit different from what I was thinking of, at least in terms of intended use. Still, they’re maybe a good start for thinking more about this.
Any pointers would be appreciated! I’d love to read a paper if it already exists or hear an argument why it’s a Really Bad Idea.