Hello all,

I did end up speaking to Jaan Tallinn (Dario Amodei joined me on the phone call), and we continued a sporadic exchange over email for the next couple of months. With his permission, I've consolidated both the notes from the phone conversation and the text of the emails into the attached document. It is a long conversation and the formatting is a bit inconsistent, but I thought I'd share it anyway in case people are interested.

I may provide a summary of the content at a later date, but my key high-level takeaways are that
- I appreciated Jaan's thoughtfulness and willingness to engage in depth. It was an interesting exchange.
- I continue to disagree with the way that SIAI is thinking about the "Friendliness" problem.
- It seems to me that all the ways in which Jaan and I disagree on this topic have more to do with philosophy (how to quantify uncertainty; how to deal with conjunctions; how to act in consideration of low probabilities) and with social science-type intuitions (how people would likely use a particular sort of AI) than with computer science or programming (what properties software has usually had historically; which of these properties become incoherent/hard to imagine when applied to AGI).
This conversation is not as important to my view of SIAI as the conversation with its representatives, which I sent out previously and which I considered sufficient to reach a stance on SIAI for GiveWell's purposes; but since we did have the discussion and people might be interested, I'm sending it out.

Best,
Holden

On Tue, May 3, 2011 at 3:30 PM, Jonah Sinick <jsinick2@...> wrote:

On a different note, there was just a post on Less Wrong here http://lesswrong.com/lw/5il/siai_an_examination/ giving a very detailed analysis of SIAI's finances and activities over the past five years. Again, this doesn't address the issues which Holden (and myself in fact) see as key.
(But which may nevertheless be of interest to other donors)
Jonah

On Sun, May 1, 2011 at 4:22 AM, Jonah Sinick <jsinick2@...> wrote:

Hi Vipul,
While one may be able to gain some insight into the reasons for supporting a charity from talking to a large donor, I think that there's reason to believe that in a given instance the probability of getting a useful answer is fairly low. One data point in this direction is the Gates Foundation's grant for Japan relief and their answers to GiveWell's questions. I've formed a general impression that in many (most?) cases large donors do not carefully analyze charities with a view toward optimizing the positive impact of their donations.
In the case of Peter Thiel, I'll note that according to the data available on Wikipedia
http://en.wikipedia.org/wiki/Peter_Thiel#Philanthropy
it seems that his donations to the said organizations are in the range of $1 million per year, which is in the neighborhood of 1% of his annual earnings. The fact that the amount he donated to them was such a low percentage of such a high income suggests a lack of seriousness of purpose. Moreover, the sorts of assumptions under which marginal donations to SIAI would be of comparable expected utility to marginal donations to the Seasteading Institute are very special assumptions which seem unlikely to hold; this suggests that Thiel may be funding at least one of them without a view toward maximizing expected utility.
If you're interested in donating to one or more of the organizations mentioned in the Breakthrough Philanthropy event, you might consider visiting them personally. Each of SIAI, SENS, and the Seasteading Institute is in the San Francisco Bay Area, a place one might end up visiting for any number of reasons. I visited SIAI last December and was received hospitably.
Jonah

On Sat, Apr 30, 2011 at 7:14 PM, Vipul Naik <vipul@...> wrote:

Dear Holden/GiveWell,
Thanks a lot for this update. In the transcript, the SIAI people
mention two people, Peter Thiel (also mentioned in my earlier email)
and Jaan Tallinn. You say that you already know about Peter Thiel. Does
this mean that you at GiveWell have already talked to him, or that you
have read enough of his public writings/speeches that you think you
have a fair idea of his reasons for supporting SIAI?
If you have a record of a conversation that you have had with Peter
Thiel (or somebody from his foundation) on these organizations, would
it be possible for you to make it public?
More generally, have you made public any of your records of
conversations with foundations or other large-scale donors whom you've
approached for information on the charities that they fund?
Thanks, and keep up the good work!
Vipul
* Quoting Holden Karnofsky who at 2011-04-30 14:38:02+0000 (Sat) wrote
> We've had a fair number of requests to evaluate the Singularity Institute
> for Artificial Intelligence, which Vipul mentioned in a recent email. This
> organization is pretty far outside the scope of what we normally look at, so
> we haven't done a formal review, but I did have a sit-down with
> representatives from this group in February, and have gotten their
> permission to publish a paraphrased transcript of our conversation. This
> document gives a good overview of my view on SIAI.
>
> In a nutshell, I have sympathy for SIAI's goals, but I do not believe that
> it currently has the (a) track record/credibility or (b) room for more
> funding that it would need in order to interest me in donating to it.
> There is one SIAI supporter/endorser I'm interested in speaking to in order
> to learn more (mentioned in the transcript). We're going to keep an eye on
> the group.
>
> Paraphrased transcript of my conversation with SIAI representatives:
> http://www.givewell.org/files/MiscCharities/SIAI/siai%202011%2002%20III.doc
>
> SIAI website: http://www.singinst.org