AI and Learning in Spain
What I've been up to intellectually
Some of you might wonder how I’ve been spending my time over here when I’m not traveling around and looking at rock outcrops. Well, during sabbaticals, I try to learn something new—a skill I can use and apply after the sabbatical. For example, I learned how to work with Arduino microcontrollers to control small motors, pumps, and other electrical equipment from my computer. That led to a really cool project where I installed water level sensors around campus. They record data every minute or so, and I’ve collected some fascinating data. But it all started by taking the time to learn how those microcontroller chips work—something I probably never would have done without that sabbatical.
This time around, I’ve been working on two very different but equally important things.
First, I’ve been trying to develop a small community of geologists in Europe who share my research interests. That’s why I’ve been meeting with other sedimentary geologists studying microbialites. I feel I’ve made real progress: I now have a group of five or six geologists I can communicate and potentially collaborate with.
But the skill I’ve probably focused on the most is learning how to develop “full-stack, open-source databases.”
First, A Necessary Digression into Computer Jargon
Before I get ahead of myself, I should probably explain what I mean by “full-stack.”
Every time you open up Amazon, Craigslist, Facebook, Instagram, etc., you are using a full-stack database. It would be fair to say that all of modern computing is basically just full-stack database management coupled with powerful statistics. The algorithms that track you and the AI you may be using are no exceptions.
The “stack” is broken down into the back- and front-ends. The backend includes all of the data: think of spreadsheets, but many of them, all linked together. There is also computer code on the backend that allows you to sort through and find the data you want or need. Consider all of the images on Amazon of the products you buy, their prices, their descriptions, shipping, etc. That is a massive amount of data that is just stored in the backend. The backend lives in giant computing centers and is made available to you on your computer: the cloud. This is also called the server side, because it serves you the data.
Then there’s the frontend: the fancy webpage that you use, the buttons you click, the search entries you type. Every time you interact with a button, click on a link, or hit “search,” your frontend sends a little bit of computer code to the backend. The backend uses that code to get you what you want and serves it back to you.
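To make that back-and-forth concrete, here is a toy sketch using Python’s built-in sqlite3 module. The in-memory database stands in for the backend, and the little search function stands in for a frontend search box. (The table and product names here are made up for illustration; this is not the microbialite database itself.)

```python
import sqlite3

# "Backend": an in-memory database standing in for the server side.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (name TEXT, price REAL)")
conn.executemany("INSERT INTO products VALUES (?, ?)",
                 [("rock hammer", 35.00), ("hand lens", 12.50)])

# "Frontend": typing a term in a search box sends this little query to the backend.
def search(term):
    rows = conn.execute(
        "SELECT name, price FROM products WHERE name LIKE ?",
        (f"%{term}%",))
    return rows.fetchall()

print(search("hammer"))  # the backend serves back [('rock hammer', 35.0)]
```

Real sites do the same thing at vastly larger scale: the query travels over the network to a server farm instead of staying on one machine, but the pattern is identical.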
So Why Are You Doing That In Spain?
This might sound horrendously dull to most of you, but to me it has been an amazing opportunity to enjoy quiet, focused time to learn something that (a) I think will make a real difference in my work and (b) I’m really curious about. The motivation is that I’ve been building a database to track microbialite textures, morphologies, and their chemistry—one that I hope can eventually be shared with the broader scientific community. Ideally, by the time I retire, the database will be developed enough that anyone can use it to answer important scientific questions.
This all started about 15 years ago, when my father-in-law, Bill Lamb, taught me how to work with Microsoft Access to manage all of my research samples. The problem with Access, though, is that it doesn’t play well with the web. Sharing databases built in Access is clunky at best. So, I had to find a better way to share data online.
Enter “full stack development.”
I had one programming class in college and have crashed around with code every once in a while since. But computer code changes so fast, and there are so many languages and variations, that it’s impossible for me to keep up. I have to be a geologist, right? There’s no way I could become a full-stack developer in four months. But I might be able to build my database anyway.
AI: A Very Useful Tool
One of my students, Shreya, taught me to use ChapGPT when coding. She used it like her personal assistant. Her very knowledgeable personal assistant. That’s how I’m using it. It’s a tool.
ChatGPT writes the code for both the front and back ends of this new database. To me, this is a perfect example of how AI can be used well. I can pose a broad question like, “How would I build a full-stack database on microbialites?” It guides me, gives me example code, and I can ask follow-up questions until I get something I can use. It never gets tired of me asking it questions and it’s always patient. It feels very much like having an experienced coder sitting next to me, who listens intently and tries to understand me, who gives me ideas and insights, but who could never do the whole job without me. Every time I work with Chat, I run up against its limitations and its weird, blatant stupidity.
In truth, I’m not writing most of the code—ChatGPT is. Writing code can be tedious, especially because even small syntax errors—like a misplaced comma or parenthesis—can break everything. ChatGPT usually gets those details right, so I can focus on the bigger picture: how users will interact with the database, how the web interface should behave, how to handle images, and so on. I know enough about how to read and interpret code to do these things, and I have learned a ton from these interactions.
It’s also why professional coders are anxious about AI—it can do much of the work they used to do. But for someone like me, who isn’t a programmer but understands enough to ask the right questions, it’s ideal.
And, after about two months of working on this, I have a working demo of the database that I can start sharing with colleagues. I hope to have something fully functional in about six months, and I’m seeking external funding to support students helping me write the more advanced frontend code.
My Opinion(s): ATVs and AI
The most valuable part of this process is that I now have a much deeper understanding of how databases work—how they’re built, how they operate. Nearly everything we do on the web relies on a database. They are absolutely fundamental, and they’re essential for advancing the sciences. I feel like I’ve learned something truly useful and gained insight into how digital infrastructure works in a way I never had before.
My eyes are also wide open to the ethical, social, and political implications of this technology. One analogy comes to mind for me that I feel fits here: ATVs.
Ethically, I believe that running ATVs all over trails for recreation is unethical. It burns fossil fuels, destroys ecosystems, creates noise and air pollution, and allows people to sit on their butts but feel like they’re all outdoorsy and shit. I feel similarly about jet skis. But, I think that there are very appropriate uses for ATVs: search and rescue; accessing distant places for limited-time, low-impact scientific research; helping disabled people interact with nature if they are mobility-limited.
AI is like an ATV: you can use it to create stupid art and music, with no creative input on your side. You can create deep-fakes and disinformation with it. You can have it write your essays for you. The carbon footprint of AI alone makes these types of uses unethical to me, let alone the more basic moral questions they raise. But, also like an ATV, AI is an effective tool for scientists and engineers that helps them improve their work, make their science more accessible, and learn new ways of solving problems.
I did use ChatGPT to check for spelling and grammar issues in this post. You’ll notice that it missed at least one error, which I left in as an example.



Thanks for the comment. It has always been my M.O. to adopt new technologies so as not to get stale. If you're interested in what an AI chat looks like for me, here is an example of a session I did yesterday: https://chatgpt.com/share/68397449-ea30-8012-a4e2-cd69d3c92ab7
I'm actually very in over my head on this, but it feels pretty good. If you're interested, you can look at my webpage on this: https://tahickson.github.io/microbialbiosignatures.github.io/
One thing that I want to make sure I do is cast the net broadly to encompass as many microbial "biosignatures" as possible, including those in travertines. I plan on holding listening sessions at GSA and at a conference in Germany in the Fall.
The basic idea is that you can enter data for a "project," "citation" (or group of citations), and/or "collection" (like a museum). Then, from there, you add information about mega-, macro-, meso-, and microstructures using the classification of Grey and Awramik. However, I am very attuned to the fact that this classification will need expansion for other settings, textures, etc. Every megastructure should be geocoded for use in GIS (online, eventually). If you're interested, once I have it where I want it, I can share the temporary MS Access frontend with you and you can play with it a bit.
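Translating that idea into linked tables, a minimal sketch might look like the following, again using Python’s built-in sqlite3 module. All table and column names here are illustrative guesses on my part, not the actual schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- Hypothetical schema sketch: one project, with citations and
-- geocoded megastructures linked back to it. Names are illustrative.
CREATE TABLE project  (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE citation (id INTEGER PRIMARY KEY,
                       project_id INTEGER REFERENCES project(id),
                       reference TEXT);
CREATE TABLE megastructure (
    id INTEGER PRIMARY KEY,
    project_id INTEGER REFERENCES project(id),
    description TEXT,
    latitude REAL,   -- geocoded for eventual GIS use
    longitude REAL
);
""")

# Enter a project, then hang a megastructure off of it.
conn.execute("INSERT INTO project (name) VALUES ('example project')")
pid = conn.execute("SELECT id FROM project").fetchone()[0]
conn.execute(
    "INSERT INTO megastructure (project_id, description, latitude, longitude) "
    "VALUES (?, 'domal stromatolite', 41.6, -0.9)", (pid,))

rows = conn.execute(
    "SELECT description, latitude, longitude FROM megastructure").fetchall()
print(rows)
```

The point of the foreign keys is that every citation and every structure ties back to a project, and the latitude/longitude columns are what would let each megastructure be pulled into a GIS layer later on.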