I’m working on adding accessibility features for individuals with disabilities to Ioquake3, and one of those features is text-to-speech. I’ve settled on espeak-ng, but its build process is a bit involved. It’s fully documented here; on Mac/Linux/BSDs/… we’d need to add several libraries to the build process, or find them on the system somehow, and we’d also need to run the autotools toolchain during the build. For now I’m targeting Linux/macOS/BSDs, but I’ll most likely work on Windows at some point. What is the correct procedure for adding new libraries like this and getting them all to build and link properly?
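To make the question concrete, here’s a rough sketch of what I imagine the build-side change could look like. This assumes espeak-ng installs a pkg-config file (espeak-ng.pc) and uses ioquake3’s existing Makefile variables; the USE_TTS flag is a name I made up for illustration, not something in the tree:

```make
# Hypothetical Makefile additions -- USE_TTS is an invented flag,
# and this assumes pkg-config can find an installed espeak-ng.
ifeq ($(USE_TTS),1)
  BASE_CFLAGS += -DUSE_TTS $(shell pkg-config --cflags espeak-ng)
  CLIENT_LIBS += $(shell pkg-config --libs espeak-ng)
endif
```

This only covers linking against a system copy, though; vendoring espeak-ng and running its autotools build as a sub-step is exactly the part I’m unsure about.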
I know I could just use dlopen/dlsym/dlclose to call into the library at runtime, but that’s really messy and I’m hesitant to do it. It would make the build side a lot easier, though. Should I start with that approach for now?
Edit: down the road I’ll also work on calling out to external TTS systems, such as Tolk for Windows screen readers and Speech Dispatcher for Linux, but that can get really messy too. If people think I should work on that instead of adding new libraries, I can do that as well.