Yes, I agree and understand that it is not feasible to implement full-text search in the AFF search engine itself. But it is quite feasible (and very desirable) to let people search for AFF stories on Google, by letting Google index the AFF archive.
I looked into sitemaps before posting my suggestion about detecting Googlebot via the user-agent. Sitemaps are not going to help. Believe me, Google already knows that the subdomains (e.g. comics.adult-fanfiction.org) exist: from links to the archive in posts on this forum, from DNS records, and from incoming links elsewhere on the web. The issue is that Googlebot cannot get to any of the content. As you mentioned before, the problem is the hidden form submission(s). Googlebot will not submit forms, and it won't store session information. This is what they are referring to when they say "dynamically generated pages" in the passage you quoted. In technical terms, it means Google probably won't crawl anything that requires cookies and/or an HTTP POST request (a POST request is what a form submission is).
I checked how comics.adult-fanfiction.org behaves in this respect, using Firefox's private browsing mode and the Live HTTP Headers add-on. When a new visitor who has never been to the site before comes in, they have to get past the WARNING page by submitting a form (a POST request) with their date of birth etc., after which they receive and keep a cookie that identifies them to the AFF server as having submitted that form, so they aren't asked to submit the same WARNING form again for some time. This is the ONLY form (POST request) required to get to the stories and to navigate from chapter to chapter. Every time a user requests any page in the archive (e.g. a story chapter), the cookie identifies them to your server, so your server knows they accepted and signed the Warning page. If the cookie is missing, they see the warning page instead of the story. THIS is the mechanism that prevents Googlebot from seeing and indexing the AFF archives: it cannot submit the WARNING form and won't keep cookies, so every time it follows a link into the archive (e.g. to a story), all it sees is the warning page.
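For illustration, here is a rough reconstruction of that gate in PHP. I obviously haven't seen your code, so the cookie name, the form field, and the two show_* functions are placeholders I made up, not the real AFF names:

```php
<?php
// Rough reconstruction of the warning gate as observed from the headers.
// 'aff_warning_accepted' and 'dob' are my guesses, not the real names.

if (isset($_COOKIE['aff_warning_accepted'])) {
    // Returning visitor: cookie present, serve the requested page.
    show_requested_page();   // placeholder for the real archive code
} elseif ($_SERVER['REQUEST_METHOD'] === 'POST' && isset($_POST['dob'])) {
    // Visitor just submitted the WARNING form: set the cookie, then continue.
    setcookie('aff_warning_accepted', '1', time() + 86400 * 30, '/');
    show_requested_page();
} else {
    // No cookie and no form submission -- this branch is all Googlebot ever sees.
    show_warning_page();     // placeholder for the real warning page
}
?>
```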
So do not waste time on sitemaps; they won't help. What is needed is a way to detect Googlebot and let it browse the archive without checking its cookie (and therefore without showing it the WARNING page/form).
There are a ton of resources on the web about detecting Googlebot, this one being the first result for PHP. I am not sure what the backend of the AFF database interface is written in, but I am pretty sure someone out there has figured out something similar that you can use. A rough sketch of the idea is below.
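Something along these lines (a minimal sketch, assuming a PHP backend). The user-agent string alone is trivial to fake, so it is paired with the reverse-plus-forward DNS check that Google itself recommends; otherwise any visitor could skip the age warning just by spoofing the Googlebot user-agent:

```php
<?php
// Minimal sketch, assuming a PHP backend.

function is_googlebot() {
    $ua = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';
    if (stripos($ua, 'Googlebot') === false) {
        return false;                       // not even claiming to be Googlebot
    }

    $ip   = $_SERVER['REMOTE_ADDR'];
    $host = gethostbyaddr($ip);             // reverse DNS lookup

    // Genuine Google crawlers resolve to *.googlebot.com or *.google.com.
    if (!preg_match('/\.(googlebot|google)\.com$/i', $host)) {
        return false;
    }

    // Forward-confirm: the hostname must resolve back to the same IP.
    return gethostbyname($host) === $ip;
}

// In the warning gate sketched above, skip the cookie check for the crawler:
if (is_googlebot()) {
    show_requested_page();   // same placeholder as before
}
?>
```

The DNS lookups are not expensive in practice, since only requests that already claim to be Googlebot ever reach them.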
Please let me know what you think.