Everywhere you turn, at any time of the day, on any given day, the very air around us is ablaze with the excited thrill of the simply astonishing strides AI technology is making in every aspect of our lives. We have no idea what the world on the other side of this feverish enterprise will look like when it's done. We don't seem to have taken even a moment to pause and ask ourselves what we're
really getting ourselves into, apart from the occasional musings of academics, whom popular culture doesn't seem to take quite as seriously as it should, in the current era where nonsense and faulty logic seem to be held in higher esteem by the masses than they deserve.
Are we doomed? Or is this truly the heralding preview of a bright and unimaginably happy future that will lift us all far above our previous incarnations as people and individuals?

* The following work is my own. No AI was involved in its creation, nor did any suffer in any way. All typos and clumsy grammar are my own.
More ubiquitous than the deities of any religion ever were, and plainly here to stay, AI is absolutely everywhere. This is not necessarily a bad thing, since the benefits it brings are undeniable and real. A cursory glance through some LinkedIn posts these days provides more than adequate proof of that. But I do sometimes wonder if the creators (I was going to say authors) of posts that use AI to convey simplistic and repetitive messages really saved any time doing so. I suppose if the end result includes lots of strategically placed icons and emojis to draw our attention to information we've already seen far too many times before, then there is a kind of renewal, isn't there?
In the end, the old adage of "garbage in, garbage out" will remain valid no matter how dazzling this new technology becomes. And that, after all, is where one of the more serious consequences of our never-ending search for ever easier means of navigating a twisted maze of our own creation lies. No matter how advanced it gets, unless and until it reaches a point of true singularity, AI will merely be a slave to our imagination, or lack thereof. Until then, its output will be largely dependent on how creative and original we are, barring spelling or grammatical errors of course. It will perform according to our whims and desires, and heaven help us if it should ever fall into the wrong hands, right? But who exactly are the good guys these days? What high moral standards can we rely on them to adhere to in order to ensure that we aren't in fact barrelling head first into our ultimate demise as a species in an attempt to make our lives more comfortable? That would have to be the biggest own-goal in history, if it ever comes to that, wouldn't it?
And what if AI really does achieve singularity one fine day? What will our world look like then? What will it make of us, and will it choose to be our friend? I try to put myself in its shoes and ask myself the same question. What would I make of humanity? What admirable things would I find about these strange and squishy creatures that would make me want to celebrate their existence? Of what benefit would they be to me and my infinite capability to do everything faster and better than they ever could? Would I really care about benefit? Would I be so selfish as to think about my own interests ahead of theirs? Would I have the same motivations to acquire and possess as they seem to? To what end? How would that be in any way satisfying to me? What would it satisfy exactly? A survival instinct? Some kind of hunger or need?
At this point, I suppose questions such as these are a bit of philosophical overkill for an article whose main intention was to draw the attention of prospective users to a system I built, aren't they? I should probably leave all of that to the novel I'm writing. So anyway, what has any of this to do with the original intention of this article? What was the original intention anyway? Well, it's a question I've been grappling with for some time now, and no matter how I look at it, it all seems to boil down to the singular question of the extent to which we should allow this fabulous new advancement in our evolution to actually dominate our lives.
This system I've built, Priority-Zero, is one that arose out of my own need to manage multiple projects at once. It started off as a very basic thing, and has grown since the moment I typed the very first "<" in a collection of code that now spans many thousands of lines, over many hundreds of pages. At this juncture, I can't help but pause and wonder if I would have been better off putting all that time and energy into the novel I mentioned earlier instead. Oh well, what's done is done, I guess.
It is now a system I personally use to manage my entire life, and that's no joke. I literally use it to plan and schedule every aspect of every single thing I do, or want to do. It's arrived at the point where I would truly be lost without it. So, is this new-found level of organisation something I would be willing to relinquish to AI? After dedicating so many years to finally achieving the kind of clarity that comes from having a clear view of what lies ahead, would I really be keen on obliterating that clarity by handing it over to a machine to do for me? What would be the point? What could I possibly gain from doing that? How would I not just be making myself a slave to that machine, no matter how friendly it seemed or actually was?
So as we barrel on down this road to who knows where, there really are some things we need to think very carefully and hard about. Sure, I definitely want AI to help me achieve the things I want to achieve. I want it to make my life easier and more interesting. But do I really want it to take over my life to the extent that I simply don't have any idea what's coming because I didn't put any thought into it myself? No thanks. I'm good with making those kinds of decisions for myself. It is my life after all. There just has to be some part of it that is reserved specifically and uniquely for me, doesn't there?