# brains versus the exponential

In my 2010 paper on Codd's self-replicating computer, I estimated that it would take at least 1000 years for the machine to replicate. If we left it running, we might expect it to complete in the year 3010.

But of course computing power is always increasing, so how soon should we expect it to happen?

Moore's law says that computing performance doubles every 18 months. So by now (18 months after the paper came out) the machine should take only 500 years to replicate, completing in 2511.

| start year | duration (years) | completion date |
|------------|------------------|-----------------|
| 2010       | 1000             | 3010            |
| 2011.5     | 500              | 2511            |
| 2013       | 250              | 2263            |
| 2014.5     | 125              | 2140            |
| 2016       | 62.5             | 2079            |
| 2017.5     | 31.3             | 2049            |
| 2019       | 15.6             | 2035            |
| 2020.5     | 7.8              | 2028            |
| 2022       | 3.9              | 2026            |
| 2023.5     | 2.0              | 2025            |
| 2025       | 1.0              | 2026            |
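The table above can be reproduced with a few lines of Python, assuming a clean 18-month doubling from the 1000-year estimate of 2010:

```python
# Runtime halves every 18 months from a 1000-year estimate made in 2010.
def duration(year, start=2010.0, base=1000.0):
    """Years the replication would take if started in `year`."""
    return base / 2 ** ((year - start) / 1.5)

# One row per 18-month step: (start year, duration, completion date).
rows = [(y, duration(y), y + duration(y))
        for y in (2010 + 1.5 * k for k in range(11))]

# The sweet spot: the start year giving the earliest completion date.
best = min(rows, key=lambda row: row[2])
```

Starting any earlier wastes time on slow hardware; starting any later, the wait for faster hardware outweighs the shorter run.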

If Moore's law continues to hold, and we ignore any possible developments in software, then running the figures forward the best time to start would be in 2023, when it would take 2 years, giving the earliest expected completion date of **2025**. *Check back then to see if this came true!*

(We could keep the program running and move it onto faster computers each year, but this would only save us a couple of years so it's hardly worth bothering; we might just as well wait for Moore's law to catch us up.)

By 2046 (I will be 70) the machine should replicate in 30 minutes, on computers 8 million times faster than today's.
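These figures can be sanity-checked in a couple of lines, assuming the doubling runs cleanly from the 500-year estimate of mid-2011:

```python
# Speedup by 2046 relative to mid-2011, doubling every 1.5 years.
speedup = 2 ** ((2046 - 2011.5) / 1.5)        # 2**23, about 8.4 million

# The 500-year run of 2011.5, divided by that speedup, in minutes.
minutes = 500 * 365.25 * 24 * 60 / speedup    # roughly half an hour
```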

It's interesting to speculate what algorithm advances might lead to self-replication happening earlier than 2025. The hashlife idea that Golly uses is extremely helpful. On stable, repeating structures like the static wiring of Codd's machine the algorithm excels, allowing it to make jumps of millions of timesteps in one go by re-using results from before.
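The heart of the hashlife trick is memoization: once you have computed how some block of cells evolves, you never compute it again. Here is a toy sketch of that idea for Conway's Life, stepping a 4x4 block to its 2x2 centre; the real algorithm adds a recursive quadtree on top of this, which is what lets it leap millions of timesteps at once:

```python
from functools import lru_cache

def life_step_cell(grid, x, y):
    """One Conway's Life update for the cell at (x, y)."""
    n = sum(grid[y + dy][x + dx]
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dx, dy) != (0, 0))
    return 1 if n == 3 or (n == 2 and grid[y][x]) else 0

@lru_cache(maxsize=None)
def step_block(block):
    """Advance a 4x4 block (tuple of tuples) one step; return its 2x2 centre.
    The cache means a pattern seen before costs a dictionary lookup."""
    return tuple(tuple(life_step_cell(block, x, y) for x in (1, 2))
                 for y in (1, 2))
```

On the static wiring of Codd's machine the same blocks recur everywhere, which is exactly when a cache like this pays off.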

Conceivably this could be beaten by an algorithm that was capable of analysing the function of each component in Codd's design, and making a symbolic representation of how it would work. Such a thing has been used by Heiner Marxen for analysing Turing machines (which are much like 1D cellular automata) in the search for Busy Beavers. He calls them Macro Machines. If someone manages to adapt Macro Machines to work on generic 2D cellular automata then all the work of Codd's machine could happen near-instantly, even on today's machines. Suddenly 1000 years looks a lot closer than before.
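To give a flavour of the macro-machine idea (this is a toy illustration, not Marxen's actual implementation): store the tape as runs of identical symbols, so a rule that loops over a run can be applied to the whole run in one symbolic step.

```python
# Toy sketch: a tape stored as (symbol, run_length) pairs. A rule like
# "reading 1: write 0, move right, stay in the same state" sweeps an
# entire run, so we can apply it to the whole run symbolically.

def sweep_run(tape, new_symbol):
    """Collapse run_length ordinary steps into one macro step."""
    (old_symbol, length), rest = tape[0], tape[1:]
    return [(new_symbol, length)] + rest

tape = [(1, 1_000_000), (0, 5)]   # a million 1s followed by five 0s
tape = sweep_run(tape, 0)         # one macro step instead of a million
```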

This post was originally on LiveJournal.