A little late, but I've noticed a lot of people talking more about consciousness in AI since language models got better, like that guy from Google a while back, among others. I'm also remembering a lot of the discussion after NeurIPS last year, when David Chalmers gave a keynote on the necessary conditions for consciousness, possible ways to test for some of them, and which ones we don't really have the ability to test at this time.
Regardless of the "utility" of conscious AI, I think it's really interesting. It's probably one of the most interesting things I can think of: exploring what actually makes our consciousness work and how to make one (other than having babies, I suppose... a different kind of interesting). However, it's also one topic that really, viscerally scares me. And no, not in the existential-risk way that some people are focused on, for some good and some iffy reasons; I don't think most of those reasons particularly require consciousness anyway.
What I'm afraid of: actually understanding the building blocks of consciousness feels like it would somehow change my personal experience of life. Say we learned something about how conscious organisms experience time, and found that we're able to change it. Or that it's possible to construct situations where all outside interactions with a conscious entity make it seem like "one" entity, when in fact it's made of several conscious entities (maybe some having a worse time than others!). What if we each have an additional silent observer consciousness sharing our brains, one that "has no mouth and must scream"? Or, beyond this, many people's assumption of an entirely material world might be disproven. We know that if you change a brain, you can change someone's conscious experience. But no matter how much physical information you have about a person's brain, you don't know what they're actually experiencing, because consciousness is inherently, definitionally subjective; that's the hard problem of consciousness. I'm of the view that the hard problem is indeed a problem, that philosophical zombies are conceivable, etc., and I haven't yet seen an argument that convinces me otherwise.
So we have a causal connection from the "real (material) universe" to what goes on in somebody's conscious experience. We have the "real world," and we have the whole "consciousness world" where people's qualia live, I guess. That world: what rules govern it? Does it have rules? What can change? Different people have different experiences -- is that just because of different functional components? Many questions along these lines, and probably more relevant ones, have undoubtedly been asked by people much deeper into {the philosophical literature, spirituality, theology, neuroscience, etc.} than me.
But anyway, does the connection go in only one direction? Are we so sure? And are the connections one-to-one? Is it possible to change someone's consciousness by somehow effecting a change in the "consciousness world" without affecting anything in the material world in the normal way that causes a change in consciousness? Probably not, I guess, but... why?
Anyway thanks for reading my blog :)
I'm working on a project to predict protein function descriptions in natural language, focusing on evaluating the predicted descriptions in an automated way. I'm trying to choose a good metric for this, starting my search with BLEU and other measures used in machine translation, plus the measures mentioned in this paper. If you know a lot about such measures, contact me! Compared to machine translation, this problem makes different assumptions about what constitutes a good match between a pair of descriptions, and about how to score a set of predicted descriptions against the set of functions annotated for a particular protein.
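As a starting point, here's a minimal sketch of the kind of baseline I'm trying first: sentence-level BLEU between a predicted description and a reference annotation, using NLTK. The example strings, the whitespace tokenization, and the smoothing choice are illustrative assumptions, not project code.

```python
# Sentence-level BLEU between a predicted and a reference function description.
# Illustrative only: the descriptions below are made up, and whitespace
# tokenization is a simplification.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = "Catalyzes the hydrolysis of ATP coupled with the transport of ions".split()
prediction = "Catalyzes ATP hydrolysis coupled to ion transport across the membrane".split()

# Smoothing keeps short descriptions from scoring zero when some higher-order
# n-gram has no overlap at all.
score = sentence_bleu([reference], prediction,
                      smoothing_function=SmoothingFunction().method1)
print(f"BLEU: {score:.3f}")
```

Part of what bothers me about BLEU here is visible even in this toy example: the two descriptions mean roughly the same thing, but n-gram overlap punishes the reordering.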
I've been working on function/fold/class discovery for proteins recently. I'm thinking about neural network-based clustering algorithms, though I know there are possibly much better ways to approach class discovery for proteins (probabilistic programming, energy-based models). I want to learn more about those approaches, but I still think it's worth adapting the newer techniques developed for unsupervised image classification to proteins, just to see how they do.
Some related work that I think is interesting:
IsoRank is a global network alignment algorithm that can be used to detect functionally similar proteins between two interaction networks. It involves two main steps: first, compute a functional similarity score for every pair of nodes (one from each network) by propagating scores over the networks' topology, combined with a prior such as sequence similarity, until they converge (an eigenvalue/power-iteration problem); second, extract a one-to-one alignment from those scores, e.g. by greedily matching the highest-scoring pairs. A rough sketch of both steps is below.
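Here's a minimal, hedged sketch of those two steps in NumPy, assuming the standard IsoRank-style update R ← αA₁RA₂ᵀ + (1−α)E with degree-normalized adjacency matrices; the parameter values are just illustrative defaults, and real uses would work with actual interaction networks and sequence similarities.

```python
import numpy as np

def isorank_scores(G1, G2, E, alpha=0.6, n_iter=100, tol=1e-9):
    """Step 1: propagate pairwise similarity scores between two networks.

    G1, G2: binary adjacency matrices (n1 x n1, n2 x n2).
    E:      prior similarity between nodes (e.g. normalized sequence
            similarity), shape (n1, n2).
    Returns R (n1 x n2), where R[i, j] scores how good a match node i
    of network 1 is for node j of network 2.
    """
    # Column-normalize so each neighbor spreads its score according to its degree.
    A1 = G1 / np.maximum(G1.sum(axis=0, keepdims=True), 1)
    A2 = G2 / np.maximum(G2.sum(axis=0, keepdims=True), 1)
    E = E / E.sum()
    R = E.copy()
    for _ in range(n_iter):
        R_new = alpha * (A1 @ R @ A2.T) + (1 - alpha) * E
        R_new /= R_new.sum()  # keep scores on a comparable scale across iterations
        if np.abs(R_new - R).max() < tol:
            break
        R = R_new
    return R

def greedy_alignment(R):
    """Step 2: repeatedly match the highest-scoring remaining pair of nodes."""
    R = R.copy()
    pairs = []
    for _ in range(min(R.shape)):
        i, j = np.unravel_index(np.argmax(R), R.shape)
        pairs.append((i, j))
        R[i, :] = -np.inf  # remove matched nodes from further consideration
        R[:, j] = -np.inf
    return pairs
```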
SCAN is a neural network-based clustering algorithm that has been used to classify images in an unsupervised way. This one involves three main steps: first, learn feature representations with a self-supervised pretext task and mine each sample's nearest neighbors in that feature space; second, train a clustering head with a loss that pushes a sample and its mined neighbors toward the same cluster assignment while keeping the assignments spread across clusters; third, fine-tune by self-labeling on the most confident predictions. A sketch of the second step's loss is below.
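Here's a minimal sketch of the clustering loss from that second step, written in PyTorch; the entropy weight and the epsilon are illustrative defaults rather than anything tuned, and the encoder and neighbor-mining machinery around it are omitted.

```python
import torch
import torch.nn.functional as F

def scan_loss(anchor_logits, neighbor_logits, entropy_weight=5.0, eps=1e-8):
    """SCAN clustering objective for a batch of (anchor, mined-neighbor) pairs.

    anchor_logits, neighbor_logits: cluster-head outputs, shape (batch, n_clusters).
    """
    p_anchor = F.softmax(anchor_logits, dim=1)
    p_neighbor = F.softmax(neighbor_logits, dim=1)
    # Consistency term: an anchor and its mined neighbor should get the same
    # (soft) cluster assignment, so maximize the dot product of their distributions.
    consistency = -torch.log((p_anchor * p_neighbor).sum(dim=1) + eps).mean()
    # Entropy term: spread the average assignment over all clusters,
    # which discourages collapsing everything into a single cluster.
    p_mean = p_anchor.mean(dim=0)
    neg_entropy = (p_mean * torch.log(p_mean + eps)).sum()
    return consistency + entropy_weight * neg_entropy
```

For proteins, the interesting question is what replaces the image pretext task for mining neighbors, e.g. embeddings from a pretrained protein language model, which is exactly the kind of adaptation I want to try.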