A Case for A.I. for E.A.

Having been a practitioner of Enterprise Architecture for over a decade, with hundreds of projects, programs, and strategies under my belt, I have come to take a more measured approach to each new initiative. Peter Drucker's famous maxim, "if you can't measure it, you can't improve it," captures the business's prevalent need to measure; combined with the inherent specificity of technology and engineering, it creates a kind of singularity where both worlds can meet and communicate on a single plane: architecture road maps, a blueprint for the future. In most cases this is where E.A. practitioners operate, providing context between business and technology.

A disclaimer: I am aware there are fundamental differences between machine learning, deep learning, and A.I.; for the purposes here, I'll treat them as synonymous, since the differences are ones of maturity and capability. Generally, A.I. maturity progresses from machine learning (a human teacher), to deep learning (autonomous), to true A.I. (self-building and self-thinking).

if you can’t measure it, you can’t improve it
— Peter Drucker

Architecture road maps are a valuable artifact in almost any major enterprise, and good road maps show the true business benefits alongside the technical ones; the two should be symbiotic, for after all, enterprise architecture is business architecture. This need has produced many different approaches, frameworks, and platforms: Rational System Architect, TOGAF, ITIL, SEI, Orbus, and many more. These are proven and profitable models, employed throughout the world in almost every large enterprise, but they all require a hefty amount of manual work.

Now a bit of forewarning: I'm no A.I. expert. I've been involved with, initiated, or led a dozen or so machine learning or cognitive services POCs (proofs of concept), and have even seen a handful move to production, but I am by no means a TensorFlow expert; my mathematics skills are limited, and innovative algorithm crafting is a bit beyond me. However, the platforms are competitive and becoming easier to use every day. Every large tech company is striving to provide A.I. and machine learning as an inherent value-add to, typically, cloud storage and compute; and it just so happens these are prerequisites if you need machine learning to scale, a good match.

I believe machine learning and A.I. are the inevitable future of Enterprise Architecture. Let's talk about what's measurable first. Road map E.A. activities generally work at a high level like this:

  1. Build a current-state logical subsystem view, cataloging the business functions and capabilities against the logical system names.
  2. Build a technology catalog: what are those systems, what versions of which platform are they on, where do they sit in the network, and what are their specs?
  3. Assess performance architecture: where are your utilization levels, what is struggling, and what is wasting compute by just sitting there warm at low utilization?
  4. Build a target state based on conversations with leaders; in most cases these leaders are parroting what they've read elsewhere, most often Gartner, Forrester, or certain publications, for their strategic vision. Incorporate this into the target.
  5. Incorporate architectural patterns, standards, and principles at all levels of the road map.
  6. Build evolutions that match current tactical and strategic needs; this sometimes requires pulling up previous estimations on roles and execution, hopefully from a project management repository or similar. This gives an idea of speed to market.
  7. Fin! (For now.) You have a road map to run off of. Sounds easy; obviously there is more to it: a holistic understanding of how technologies work together, where you want your DevOps, how scaling will work, whether the projects have viable ROIs and CBAs, and so on.
  8. Repeat and maintain.
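The first few steps above can be sketched as a simple gap analysis between a current-state catalog and a target platform map. This is a minimal illustration only; the field names, platforms, and thresholds are my own hypothetical assumptions, not any standard E.A. schema:

```python
from dataclasses import dataclass

@dataclass
class System:
    """One entry in the current-state catalog (fields are illustrative)."""
    name: str            # logical system name (step 1)
    capability: str      # business function it supports (step 1)
    platform: str        # technology catalog entry (step 2)
    utilization: float   # 0.0-1.0, from performance data (step 3)

def roadmap_gaps(current, target_platforms, low_util=0.2):
    """Compare current state to a target platform map (step 4) and
    emit candidate road-map actions (steps 5-6)."""
    actions = []
    for s in current:
        want = target_platforms.get(s.capability)
        if want and s.platform != want:
            actions.append(f"migrate {s.name}: {s.platform} -> {want}")
        if s.utilization < low_util:
            actions.append(f"consolidate {s.name}: utilization {s.utilization:.0%}")
    return actions

current_state = [
    System("billing-core", "invoicing", "mainframe", 0.85),
    System("cust-portal", "self-service", "on-prem VM", 0.10),
]
target = {"invoicing": "cloud PaaS", "self-service": "on-prem VM"}
print(roadmap_gaps(current_state, target))
```

Nothing here is "intelligent" yet; the point is that once the catalogs exist as data, the gap between current and target becomes something a machine can enumerate and, eventually, learn to prioritize.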

The key question here is: where are things not measurable and quantifiable? If the measures don't exist, you need to create them. And if these activities are truly quantifiable, why can't A.I. and machine learning create and maintain a road map?

I believe they can. The challenge, as with any machine learning initiative, is data. The initial hurdle is the first step: a clear view of the current systems. Fortunately, APM (Application Performance Management) platforms, in addition to dependency management frameworks, can provide the data. If an enterprise has a clear and functioning APM platform, the technical data is there, as is the utilization. So we have a starting point as well as benchmarks to strive for. APM seems to be a beneficial prerequisite, as it's also a living platform, consistently providing a historical viewpoint; and machine learning benefits from history, since it has to learn what works and what doesn't.
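As a toy illustration of why that APM history matters, even a trivial fold over raw monitoring samples turns telemetry into the kind of per-system historical view a model could train against. The sample tuples and field layout here are hypothetical, not any real APM export format:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical APM samples: (system, hour, cpu_utilization 0.0-1.0)
samples = [
    ("billing-core", 0, 0.80), ("billing-core", 1, 0.90),
    ("cust-portal", 0, 0.05), ("cust-portal", 1, 0.15),
]

def utilization_history(samples):
    """Fold raw APM samples into per-system averages -- the historical
    viewpoint a learning platform would consume as features."""
    by_system = defaultdict(list)
    for system, _, cpu in samples:
        by_system[system].append(cpu)
    return {name: mean(vals) for name, vals in by_system.items()}

print(utilization_history(samples))
```

A real pipeline would keep far richer features (percentiles, trends, dependency fan-out), but the principle is the same: living telemetry, continuously aggregated, is the training history.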

The next part, and arguably the most immature industry-wide, is business capabilities and functions. Then again, if an enterprise has a fully functioning BPM (Business Process Management) platform, these may already be codified and historically accessible. Some E.A. platforms also cover this domain effectively, but in my experience they end up being abandoned because the platform requires an intermediary outside of the business.

Even better, imagine including E.A. platform language in micro-service architectures: applying a tag to each micro-service to indicate its usage in a larger context. Providing this level of functional detail could yield an inherent real-time E.A. repository that the A.I. platform could work with further.
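A sketch of that tagging idea: each service carries capability metadata (the tag names and fields below are purely hypothetical, not an existing E.A. vocabulary), and a repository aggregates the tags into a live capability map:

```python
# Hypothetical per-service metadata, e.g. emitted at deploy time.
services = [
    {"service": "invoice-api",  "capability": "billing",  "criticality": "high"},
    {"service": "pdf-renderer", "capability": "billing",  "criticality": "low"},
    {"service": "login-svc",    "capability": "identity", "criticality": "high"},
]

def capability_index(services):
    """Aggregate service tags into a capability -> services view,
    i.e. a real-time E.A. repository derived from deployment metadata."""
    index = {}
    for svc in services:
        index.setdefault(svc["capability"], []).append(svc["service"])
    return index

print(capability_index(services))
# {'billing': ['invoice-api', 'pdf-renderer'], 'identity': ['login-svc']}
```

Because the tags live with the deployment artifacts rather than in a separate modeling tool, the repository updates itself every time a service ships; no intermediary is needed to keep the business view current.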

So our recipe is half complete: a historical log of both technical and business functions; machine learning can begin. But machine learning works best with target examples, teaching the machine that "this is ideal." This is where patterns come into play, and patterns are inherently useful for A.I. This is also where architectural styles come into play, a more subjective matter; perhaps avoiding styles at this point leaves open options to which styles can be applied in decision-making, post-process. In addition, common laws can act as, well, laws for the A.I. platform to adhere to, avoid, or present levels of compliance against.

Ideally this should lead to a situation where you can continually feed hypothetical target architectures to an A.I. and get back confidence levels. This is where the E.A. can provide subjective analysis to the decision, and over time the machine can learn what best matches the organizational culture. And if it's quantifiable, it can be modeled; this is where a good machine learning platform could provide real-time architectural options with trade-off analysis.
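To make "confidence levels" concrete, here is a deliberately tiny stand-in: a weighted pattern-compliance score over candidate target architectures. The pattern names and weights are invented for illustration; a real platform would learn the weights from historical project outcomes rather than hard-code them:

```python
# Toy stand-in for a learned scorer: weights over pattern-compliance
# checks (a real platform would learn these from historical outcomes).
PATTERN_WEIGHTS = {
    "stateless_services": 0.4,
    "single_capability_per_service": 0.35,
    "managed_data_store": 0.25,
}

def confidence(candidate):
    """Score a hypothetical target architecture 0.0-1.0 as the
    weighted fraction of patterns it satisfies."""
    return sum(w for p, w in PATTERN_WEIGHTS.items() if candidate.get(p))

# Two candidate target states, described only by which patterns they meet.
option_a = {"stateless_services": True, "managed_data_store": True}
option_b = {"single_capability_per_service": True}

ranked = sorted([("A", confidence(option_a)), ("B", confidence(option_b))],
                key=lambda kv: kv[1], reverse=True)
print(ranked)
```

Ranking candidates this way is exactly the trade-off analysis described above, just with hand-set weights standing in for what the machine would learn over time.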

People don’t want to believe that technology is broken. Pharmaceuticals, robotics, artificial intelligence, nanotechnology - all these areas where the progress has been a lot more limited than people think. And the question is why.
— Peter Thiel

So it seems we may have the ingredients to get cooking. One thing I love about architecture as a profession is that it's logical: a good approach has nothing to do with the technical details; everything is brought up a level to common reason. The underlying catalyst is that a human being doesn't have enough time to apply architecture at every level, unless it's a single application. This is why we see "full-stack architects" as a prevailing trend now, compartmentalizing risk in an MVP fashion while putting further pressure on CxOs to provide E.A. strategy.

Large architectural organizations arise when the need is in place, and usually lose favor in terms of value because architecture isn't pushing revenue streams directly. With machine learning and enough data, the tools are there, if someone is ambitious enough to put it all together. Another challenge is the status quo: architects and leaders define themselves by experience, holistic knowledge, and a proven track record, and an A.I. E.A. platform would challenge conventional leadership at all levels. Objectively this would be a good thing, as many large enterprises run into trouble when they become top-heavy; corporate histories are rife with these failings.

Another challenge is the amount of data available. Big data is a common buzzword, but I'm unaware of a large public repository containing COTS (Commercial Off-the-Shelf) platform requirements or programming library dependency chains; the only thing that comes to mind is GitHub, but even GitHub doesn't contain the requirements for a deprecated third-party COBOL library or similar legacy systems.

I'm curious where the truly subjective parts of IT leadership lie: what isn't measurable, and what am I not thinking of? Why aren't we here yet? Is anyone developing a usable E.A. A.I. platform? Feel free to let me know.

Jesse Myer