Tag: AI

  • It’s inevitable – I need to write about AI

    It’s inevitable – I need to write about AI

    And true to my habits, it’s going to be from an “is it just me” perspective.

    I recently decided to make use of Claude (on a free plan so far) as my business advisor, since my so-called transition from my former (still ongoing?) consultant approach to more of a peer role is stalling. Inspired by the explosion of AI agent postings in every feed everywhere, I thought I should build my own team in order to start a marketing campaign or something similar, or at least get some momentum in my daily marketing work.

    I got some advice from the bot that I didn’t like much, since it required me to structure my archives in order to feed the agent with information about me, so that the results would have some quality. I then turned my questions toward who I am (or should be) and how to act. The prompt looked like this:

    “With the background I presented initially and the file I just uploaded, I need some help with ideation. I’m struggling with two huge questions: What do I want to do? What could be a market where my experiences are valuable and I would have a good chance to really stick out and “stick” with prospective customers and what value should I bring to them? I have so hard to change route from the old-school consultant that I have been and maybe not been good enough as. I also have an idea about putting together a team of diversely skilled AI agents in order to get some momentum in an ideation process, but I don’t know if it’s worth the costs and effort.”

    Now things started to turn in more interesting directions. I had attached my profile, or CV, in English to this text, and Claude told me that I possibly undersold myself. We started to elaborate on this, putting the whole agent project aside for a while, since it wasn’t important right now.

    I got a couple of alternatives for how to present myself and picked one that was, according to Claude, more niche and possibly higher risk, but that speaks to the “curious and analytically minded” (those who understand what I actually offer?).

    I reasoned that it’s worth the risk of not being understood by everyone. The ones who do understand are probably more worth approaching anyway. And since I had given Claude the link to my website, it was hard to avoid a rebuild in the direction we had taken.

    So, the web page was radically rebuilt from my earlier radical rebuild, and it felt okay.

    The next step was to dig into my LinkedIn profile, where I recently made a shift towards being a facilitator. The market for facilitators has recently started to erode, since AI tools can do a good portion of the work facilitators have done until now (this may be a bit exaggerated, since very many organizations are far behind the ones striving for real digitalization, but one cannot shy away from it entirely).

    This is where I was forced to halt and think things over. At an earlier stage I had been given a tagline and a text for the “About” section on LinkedIn. I took the tagline and pasted it as it was; it looked okay. Then I asked Claude for a new draft of the About section; it looked okay too, so I pasted it. Those two elements are what’s there now, at the time of writing. But what came next?

    I got a recommendation to write a post about myself describing my skills and add it to the Featured section on my LinkedIn page, so I asked Claude for a draft (at this point I was so accustomed to Claude doing things for me that it felt only natural). But it didn’t feel good. The post is here, as-is:

    Most organizational problems aren’t people problems

    When a team underperforms, the diagnosis is almost always the same: wrong people, bad attitude, poor leadership.

    It’s usually wrong.

    In 25 years of working inside organizations — defense, healthcare, manufacturing, retail tech — I’ve seen the same pattern repeat across sectors, sizes, and cultures. The people are capable. Often experienced. Frequently aware that something isn’t working.

    What’s broken is the design around them.

    Three structural flaws show up more than any others:

    Unclear mandate. The team doesn’t have a shared, explicit understanding of what they’re actually responsible for — and what they’re not. So they absorb work that isn’t theirs, fail to deliver what is, and gradually lose confidence in their own judgment.

    Uncontrolled intake. Work arrives from everywhere, in every form, with no triage. The team is perpetually reactive. Nothing gets finished properly because something more urgent always appears. The list grows. Meetings multiply.

    Invisible technical debt. Infrastructure, tools, and processes that nobody owns and nobody has a plan to maintain. Everyone knows it’s a problem. Nobody has the mandate — or the budget line — to fix it.

    None of these are HR problems. They’re design problems. And they won’t be solved by a team-building day, a new manager, or a strategy offsite.

    They’re solved by looking at the structure honestly, naming what’s broken, and building interventions that hold.

    That’s what a business architect does.

    If you recognize this situation — in your team, your organization, or somewhere you’ve been — I’d be interested to hear which of the three it was.

    I just felt that this is maybe not me after all. It’s an opinion coming from an engine that I have fed with possible exaggerations and descriptions of who I have wanted to be at some point in time. But does it align with who I want to be now? I honestly can’t tell, and I haven’t been able to tell for the last three or so decades, so how could an AI that has known me for just a couple of days, and been given only a thin slice of me, know? The text is valid for sure, but it describes more of an alias I strived for in one period of my life. I have also noticed that many solopreneurs out here present an identity they want to have while actually working lower in the value chain, and it’s kind of sad.

    This made me realize that building myself is a job I must do myself: I need to carve out the real me, what I want to do, and why. This comes at a time when delivering advice as a service is (possibly temporarily) disrupted, when so many are searching for solutions and just as many are searching for the next gig. Being authentic is the most important thing, but which authenticity should I choose? And how does it make me stand out from the crowd? People like me are a dime a dozen, fighting to be seen. Is it really meaningful, or should I keep experimenting with Claude and see how far it can go?

  • Agentic AI and fear of the MS SharePoint phenomenon

    Agentic AI and fear of the MS SharePoint phenomenon

    I attended a webinar the other week on the promise of AI agents that can supposedly tenfold your capacity. It was arranged by a host who always follows the same pattern in their webinars: they talk about a subject with a fairly high FOMO factor, presented as “this is HUGE” (you are scared of missing out on something), and yes, at a given point in the seminar you receive a heavily discounted offer for a course on the subject being discussed. By then they have created so much FOMO that people are very inclined to sign up for a course like this, where you get a certificate, and everyone is happy and suddenly has a feeling of expanded capability. All well and good, right?

    This forced me into deeper thinking about this concept of “agentic AI.” And yes, it is fantastic for automating things you do. Like, for example, what I am doing right now: recording a voice memo as a draft for a blog post in Swedish, letting an AI transcribe it to text and translate it to English. Then I can publish it and save a fair amount of writing time. Strictly speaking, I shouldn’t stop writing completely, because then I might lose the ability to actually express myself in writing, even if I don’t believe that will happen in the short term.

    In the webinar, as in public spaces, there is, and has been for quite a while, a lot of talk about how AI agents let you do programming: you essentially replace the developer with an AI agent and the ability to write code automatically, and they say that English is the new programming language. This is incredibly alluring, and it lowers the barriers for an incredible number of people, provided you pay 20 or 100 USD a month, depending on which plan you choose for this kind of AI service. In this case it was Claude, which right now is probably the service with the best capability when it comes to agent-supported programming, or vibe coding, or whatever you want to call it.

    They also talk about building apps, updating web pages, perhaps even creating a web page very quickly and easily, one you can even let the AI agent publish on the web, and then everything is done. The webinar in question was not focused on security; they made that clear right from the start: “we are not talking about security here, we are talking about possibilities.” And sure, you can take that stance, but in practical, everyday application you inevitably must address the security question as one of the first things you do. You can’t just drop a web page on the internet so full of holes and security gaps that it basically becomes unusable.

    The entire web is boiling with scanning and probing for security leaks and holes, ongoing today at a speed and intensity that is almost hard to comprehend. So you simply have to bring security along with you, right from the start.
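    To make that concrete: even the simplest self-published page should ship with a baseline of security hygiene. Here is a minimal sketch using only Python’s standard library; the header names and values are a generic baseline of my own choosing (nothing from the webinar), and a real deployment would need much more than this:

    ```python
    # Sketch: a trivial page server that at least sends baseline security headers.
    # Illustrative only; not a hardened production setup.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    SECURITY_HEADERS = {
        "Content-Security-Policy": "default-src 'self'",  # block third-party scripts
        "X-Content-Type-Options": "nosniff",              # stop MIME-type sniffing
        "X-Frame-Options": "DENY",                        # prevent clickjacking frames
        "Strict-Transport-Security": "max-age=63072000",  # HTTPS only (when behind TLS)
    }

    class SafePageHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = b"<html><body><h1>Hello</h1></body></html>"
            self.send_response(200)
            self.send_header("Content-Type", "text/html; charset=utf-8")
            self.send_header("Content-Length", str(len(body)))
            for name, value in SECURITY_HEADERS.items():
                self.send_header(name, value)
            self.end_headers()
            self.wfile.write(body)

    # To serve locally:
    # HTTPServer(("127.0.0.1", 8000), SafePageHandler).serve_forever()
    ```

    The point is not this particular snippet, but that headers like these belong in the very first version the agent publishes, not in a later “security pass” that may never happen.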

    Another thing I have thought about: if everyone starts writing code by speaking English to a prompt, the result will be code that, wherever it is deployed, no one reviews. The quality might end up being “so-so.” What is worse, if you do this in an organisational context, we will also have a future problem with an analogy in a story from my own experience that I thought I should recount here:

    This was somewhere between 2010 and 2015, when I was assigned as delivery manager for a SharePoint platform (among a slew of other, huge, platforms and systems) at one of Sweden’s largest government agencies. MS SharePoint was implemented as a global intranet solution for the entire organisation. Delivery was okay, though progress in development and maintenance was, of course, too slow.

    Note that this was before MS 365 and the cloud paradigm hit the world, so we lived in an on-premise universe.

    The phenomenon that started to emerge was that creative employees who ‘cracked the code’ of configuring SharePoint started building such clever pages for their sub-organisations, teams, or departments/units that they created fantastic tools for the business to use. In some cases, these became more or less critical as tools for the local entity to rely on.

    This was great, initially; everybody was enthusiastic and development was well anchored in the business. A problem arose, though, when these code-cracking individuals suddenly left for another job or a new position, because then no one was left who knew what they had done to create the tool. When someone later wanted to improve, change, or repair something that wasn’t quite right, there was no one to take over and solve the problems that arose for the organisation, or for the local department relying on the sometimes heavily configured SharePoint space.

    I had several quite tough conversations and discussions with our internal client, who felt this was something we, as maintainers of the SharePoint platform, had to solve; that it was our responsibility to take care of business-caused ruptures due to natural changes in the organisation. I replied that we had no idea what the business units were building or what needs it was based upon. It was simply not possible for us (as we were organised, given our agreed mission) to manage unknown code or unknown configurations we didn’t understand, especially since there was no documentation anywhere. There was no foundation for these solutions within the application management team at all. We managed the standard platform in an agreed configuration, exactly as deployed. We found it very difficult to take responsibility for what users created in terms of user-centric configuration and content. We simply didn’t know what the users were doing.

    It was considered upsetting that we had such a “nonchalant” attitude towards our “mission” (obviously a mission whose definition slid with the mood of the day). But it wasn’t about being anti-customer; it was about only being able to take responsibility for something you have actually done. And we hadn’t created what the users had created.

    So, how does this relate to my opening story about AI agents? I see a risk that AI agents can create “sloppy applications” in a way, and on a scale, that could become very extensive. If users do this within an organisation, for the organisation, and then move on to a job elsewhere, the organisation is left with an application that is potentially very hard to refactor, configure, or change, because no one knows what that employee actually did. It’s quite likely that such an employee is a “single point of failure”: someone who cracks the code much better than everyone else.

    This could have potentially dire consequences for businesses that suddenly find themselves with a whole string of possibly broken tools. I don’t know if we really want to see a future of ‘SharePoint all over the place,’ so to speak.

    I wonder what you think about this development and how you see it: how should we handle it going forward? I believe businesses should be very careful about allowing app development driven by individuals in a stochastic and spontaneous way across the organisation. We cannot simply replace IT professionals, the traditional coders, with AI agents fed with spoken requirements and needs to bring tools to life. I think we will deeply regret it the day these systems require more thorough maintenance.