Probabilities in Typing Judgments: Done Before?

(Second post today. I don’t know why, but I feel guilty, like I’m spamming everyone.)

So this brief post is my attempt to ask whether something I was playing around with last night has been done before. It’s silly, but I got bored while we were hate-watching a movie and started writing down formation/typing rules for a lambda calculus with probabilities attached to its typing judgments, \Gamma \vdash_{\rho} m : A, where the intended interpretation is that evaluating m succeeds with probability \rho. The idea was a typed lambda calculus with a built-in notion of a computation failing for reasons outside the program: running out of memory, hardware faults, etc. I was thinking about it because I once saw a colloquium (I don’t remember the speaker, sorry) arguing that we don’t take the underlying reliability of hardware into account enough when designing our algorithms.
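To make that a bit more concrete, the rules I was scribbling looked roughly like the sketch below. This is only a sketch under assumptions of my own: failures are independent (so probabilities multiply), and \rho_0 is a hypothetical per-reduction success rate for the underlying hardware. A more careful version would probably annotate the arrow type with the body’s probability, in the style of latent effects in type-and-effect systems.

    \[
      \frac{x : A \in \Gamma}{\Gamma \vdash_{1} x : A}
      \qquad
      \frac{\Gamma,\, x : A \vdash_{\rho} m : B}
           {\Gamma \vdash_{\rho} \lambda x.\, m : A \to B}
      \qquad
      \frac{\Gamma \vdash_{\rho_1} m : A \to B \qquad \Gamma \vdash_{\rho_2} n : A}
           {\Gamma \vdash_{\rho_0 \cdot \rho_1 \cdot \rho_2}\; m\, n : B}
    \]

Variables carry probability 1 (looking up a value involves no reduction), and each application discounts by \rho_0 for the beta-step it will eventually take.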

So has that been done before? I found

but these seem to be a bit different from what I was thinking of, at least in terms of intended use, though they may be a good start for thinking more about this.

Any pointers would be appreciated! I’d love to read a paper if it already exists or hear an argument why it’s a Really Bad Idea.

3 thoughts on “Probabilities in Typing Judgments: Done Before?”

  1. At best tangentially related: you may want to look into what Ken Shan, Rob Zinkov, Jacques Carette, and (I think?) Oleg Kiselyov have been working on. Their work is on program transformations for probabilistic programming languages (i.e., where variables name distributions over values), but it could be helpful in manipulating terms of the sort you’re describing.

    More closely related: I’m not aware of any work on using probabilities in the way you suggest. It seems like you’d want to look into the QA or fault-tolerance literature and then try to codify that in your language (similar to how PL took over certain areas of the security literature). The big thing to solve is figuring out the motivation. You can’t really quantify the likelihood of failure, only estimate it empirically, so what benefit is there in percolating that information through terms? (Cf. complexity analysis: the motivation justifies not caring about the actual numbers.)

    Alternatively, you could change what the probabilities mean while (probably) keeping the formalism you have in mind. For example, consider the extension of context-free grammars to probabilistic context-free grammars. If the probabilities just give the likelihood of a particular well-typed term, you could use this to automatically generate programs with particular typing structures; or, if you lack principal types, the probabilities could give a distribution over the possible types for a given (untyped) term.

    • Ah, okay, so what’s the benefit of percolating that information? Well, the context inspiring my meanderings was a colloquium I saw a few years ago about quantifying the difference in fault tolerance between algorithms, based on the idea that certain hardware primitives have a quantifiable chance of returning a faulty result. The actual discussion was more nuanced than simple succeed-or-fail, but that was the starting point my brain was at a few nights ago. Basically, I was just thinking of a really simple way to keep that information around through type checking, because why not? (A toy sketch of such a checker follows below.)
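To make the “keep that information around through type checking” idea concrete, here is a minimal toy checker. It is a sketch rather than anything from the post or comments: it threads \rho through a simply-typed checker and multiplies probabilities at each application. The independence assumption and the per-step success rate rhoStep are invented for illustration.

    -- A toy checker where every typing judgment carries a success
    -- probability rho. Sketch assumptions (mine, not established):
    -- failures are independent, so probabilities multiply, and each
    -- beta-step succeeds with a fixed hardware rate rhoStep.
    module ProbCheck where

    data Ty = TBase | TArr Ty Ty deriving (Eq, Show)

    data Term
      = Var String
      | Lam String Ty Term
      | App Term Term
      deriving Show

    type Ctx = [(String, Ty)]

    -- Hypothetical per-reduction success rate of the hardware.
    rhoStep :: Double
    rhoStep = 0.999999

    -- check ctx m == Just (ty, rho) means ctx |-_rho m : ty.
    check :: Ctx -> Term -> Maybe (Ty, Double)
    check ctx (Var x) = do
      ty <- lookup x ctx
      pure (ty, 1.0)                      -- a value: no reduction to fail
    check ctx (Lam x a body) = do
      (b, rho) <- check ((x, a) : ctx) body
      pure (TArr a b, rho)                -- naive: body's rho propagates
    check ctx (App m n) = do
      (tm, rho1) <- check ctx m
      (ta, rho2) <- check ctx n
      case tm of
        TArr a b | a == ta ->
          pure (b, rhoStep * rho1 * rho2) -- one beta-step, independent failures
        _ -> Nothing

    -- Example: (\x:Base. x) applied to a variable from the context.
    example :: Maybe (Ty, Double)
    example = check [("y", TBase)] (App (Lam "x" TBase (Var "x")) (Var "y"))

Running example yields Just (TBase, 0.999999): one application, so one rhoStep discount.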
