Build Real Value with AI and Program Management

Transcript
All right. Thank you again for joining us. We are recording this session, and it will be emailed to you, I think in a few days, so you'll have access to it. And please feel free to use the Q&A function in the Zoom meeting to submit any questions you may have. We've got quite a bit of content, but if there is time, I'd love to get to those as well.

All right, so my name is Nayeem Ahmed, and I'm a senior director at InterWorks. I lead our strategy and solutions teams. I'm also a certified Project Management Professional and a certified Scrum Master. I've been practicing this craft for fifteen years now, and during my heyday as a program manager, I had the opportunity to oversee very, very large implementations. We're talking about environments that ran up to fifty projects concurrently, and we supported everything from strategy to infrastructure, development projects and support, all structured within a PMO. My picture probably gives it away: I live close to the mountains, and yes, this is in Colorado. My family and I certainly appreciate the outdoors, and now that we're getting closer, fingers crossed we get a good ski season this year.

All right, so today we're discussing how you can build real value with program management, with a focus on AI. With AI, there's a lot of talk about technology capabilities that are baked into the tools we use every day, and that's all great. I think there is maybe less discussion on the best ways to structure AI initiatives, especially if you wanna learn from experience and have the ability to scale up. So the talk today will put a bit more emphasis on best practices for structuring and some of the process elements: how can you improve your chances of success with an AI type of project? If you're a program manager or project manager, some of the content may feel familiar, but again, the goal is to share learnings and an approach, especially the way we do it at InterWorks, and know that all of it is baked into what we call our delivery leadership methodology.

It's broken into three main areas. We'll spend just a little bit of time talking about where PM is today and where it's headed, and cover some trends. I will get into some use cases, and the use cases will highlight the approach; that's gonna be the bulk of the conversation. We'll take a few minutes after that to talk about the types of projects that we're seeing in the AI space, and then we'll wrap up with some learnings on being an effective project manager.

So, the saying goes that as technology evolves, so does project management and its framework. Let's take a look at how we got here. I found this interesting and hopefully you will too. This is actually taken from a PMI article that was published in 2024. The timeline you see on the screen highlights the evolution of some of the bigger updates to the project management framework. In the 1960s, companies really began to understand the value of organizing and coordinating work. In the '70s, '80s and '90s, I think we added more processes around the framework: things like teamwork and risk management, and developing orchestration to handle multiple projects. I do think some of the most exciting, and you can maybe argue most relevant, developments came since the 2000s. We realized that one size does not fit all and started with the agile methodology.
If you remember, the Agile Manifesto came out around that time. We learned to work remotely, obviously during COVID, but we also benefited from learnings from global companies that had been doing it, maybe with an offshore model. And the last one, I think, is we really emphasized developing a strategy for projects. Organizations realized it's not enough just to get the job done on time or within budget; instead, they're tying outcomes to strategy or business goals. And that could be things like, hey, we want to increase our market growth, we want to stand up a data observability capability in the organization, or we want to improve data literacy.

And the good news for professionals like us is the demand for structure is not going away. We're seeing more demand for program managers. PMI forecasts that we'll need up to 25 million project managers by 2030. And in case you're wondering, there was a study done by Zippia that said we have roughly 425,000 project managers in the United States, so that's a really big distribution outside of the US. According to Mordor Intelligence, yep, that's a clever name, the project management software market is expected to grow by 66 percent to 12 billion by 2025. PMI also states that 62 percent of project management professionals, and you may experience this yourself, work remotely. McKinsey reports that 90 percent of companies in the US have embraced a hybrid model when it comes to project management work. So for me, this is all validation that project management is here to stay.

All right, so what does that mean for leaders in the IT or data space? We talked about number one already. Forbes says this is a new trend, but I think it's been around for a while: focusing on the bigger goal of maximizing value, not just project delivery. And agile for the portfolio really speaks to our ability to react to changing business goals and be okay with making iterative progress. Number two, I think we completely agree, there is a shift which is driven by the complexity of projects. We certainly value technical skills, but there is also a stronger emphasis on interpersonal or emotional skills, what we call EQ as opposed to IQ. And then lastly, I think we talked about this as well: remote work is here to stay and likely to grow. So combining traditional and adaptive methods will provide a good balance that's gonna give you both structure and flexibility, certainly as the complexity of projects increases. And the other point around this is finding the right collaborative tools that make asynchronous work effective in your organization.

All right, and this slide highlights a little bit of the influence of AI: digital tools, real-time monitoring, intelligent resource optimization, automating routine tasks. We mentioned tools, and there are many, things like Asana, Clarity, Atlassian, Basecamp. Really, what we can do nowadays is synthesize very large volumes of project data. And that means you can get beyond the typical metrics we're used to, such as schedule variance or cost performance index. Instead, we can get better at figuring out things like the risk occurrence rate, or issue resolution time, or maybe focus more on stakeholder satisfaction.
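As a rough illustration (not from the session), here is what pulling a couple of those metrics out of raw project data might look like in Python. The earned-value figures, issue dates and field names are all made-up assumptions for the example:

```python
# Illustrative sketch only: schedule variance, cost performance index and
# average issue resolution time from hypothetical project data.
from datetime import date

# Earned value inputs for one project (all figures are made up)
planned_value = 120_000   # PV: budgeted cost of work scheduled to date
earned_value = 105_000    # EV: budgeted cost of work actually performed
actual_cost = 112_000     # AC: actual cost of the work performed

schedule_variance = earned_value - planned_value      # negative = behind schedule
cost_performance_index = earned_value / actual_cost   # < 1.0 = over budget

# Issue resolution time from a hypothetical issue-tracker export
issues = [
    {"opened": date(2024, 3, 1), "closed": date(2024, 3, 4)},
    {"opened": date(2024, 3, 2), "closed": date(2024, 3, 9)},
    {"opened": date(2024, 3, 5), "closed": date(2024, 3, 6)},
]
avg_resolution_days = sum((i["closed"] - i["opened"]).days for i in issues) / len(issues)

print(f"SV: {schedule_variance:,}  CPI: {cost_performance_index:.2f}  "
      f"Avg resolution: {avg_resolution_days:.1f} days")
```

The point is less the arithmetic than the fact that an AI-assisted tool can compute and track numbers like these continuously instead of a PM assembling them by hand.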
We can also better match the right resources to the right projects to improve our team structure. And really, all of this means that at the end of the day, we're trying to free up more time for the project and program manager so they can focus on the high-value activities that can improve the overall success of the program.

All right, so if it's all right, we're gonna all put our PM hats on and look at some use cases. Imagine that you are the program manager and you've been tasked by your leadership to develop an AI strategy, pilot a new technology, and then, you may have done this already, but we're gonna pretend you're going to run your first data science project. Your approach might look quite different depending on the project type. What we'll do is look at it very much from a project or program management perspective, and as we go through the slides, I'll make sure to highlight some of the common patterns that we see and believe are critical for being successful in each kind of project.

All right, so use case number one: let's build an AI strategy. There's a good chance that you will uncover use cases, maybe many, and that's gonna lead to a portfolio planning type of exercise. And that means vetting the use cases and prioritizing the use cases. The essential decision points may include answering questions like: what are the core objectives your business is trying to achieve? And maybe even asking, are our AI initiatives integrated with the regular business planning that happens quarterly? The other question is, where can AI provide the greatest impact? Something we do well, but could do better, is how you measure success. Are you looking for an ROI, for example? And what does that mean? Is it cost savings? Is it productivity gains? Being specific will certainly be necessary to come up with that strategy. And the other question I find useful to ask is, is the organization ready? And that means capabilities: it could be capabilities with the data assets, the technology stack, the skill sets of your resources, or executive buy-in. But really, I think the objective there is to identify gaps that can potentially be a roadblock to your progress.

For structure, we run it as a management consulting engagement, essentially creating a strategy and a roadmap from a vision. You can do this with lots of discovery sessions, an assessment of your current capabilities, and developing short-term and long-term implementation plans. From a program management perspective, obviously, I think we'll need timelines, investment estimates, as well as staffing plans. So this is how we describe the engagement on paper, and what I find useful as a project manager is that these are direct inputs to your charter and project management plan. There's an overview section in the top left corner that describes the engagement, descriptions of what to expect during the engagement, the approach and the milestones of the project, and then you also have the outcomes and deliverables from the project in the bottom right.

So what can you expect from the engagement? Typically it's a slide deck, a strategy roadmap. From our experience, it could be up to sixty slides. You certainly have the business and strategic alignment clearly articulated in the strategy brief.
It provides a current state analysis. It's gonna talk about the AI strategy and roadmap, so this is very much tying back to the elements that come up in the vision. And then we talked about use cases, so you may wanna highlight some. If you do that, I think you'll need an implementation plan with whichever use cases you've prioritized, along with the level of effort and a staffing plan. We see lots of architecture diagrams, usually for current state and future state. And then, for AI specifically, a risk and ethics assessment; there are many ways to do that, and it's very much around your data and your privacy rules. And lastly, I think we certainly wanna see some of the metrics and make sure that you've got a good, solid governance model as you propose a strategy.

From a project management perspective, just know that, and again, this is derived from running many, many strategy engagements, you're really dealing with a big population. It could be anyone from C-level to analyst level. So certainly be very flexible with calendars as you're setting up discovery or intake sessions. We find that it's extremely useful to do a kickoff and perhaps discovery on-site if possible, and then the rest can be remote. What's gonna help you are your change management skills: developing a stakeholder assessment matrix, and finding out who your change agents are, or which people you may have to convince a little bit more. Certainly do lots of group facilitation. We do that because we wanna make sure that everyone's hearing the same thing, and you can do it by domain or specialty, maybe getting different departments together. It's interesting because people do speak up and they'll correct each other, which is very nice, especially if the information isn't complete or accurate. And we do see lots of those moments. One of the biggest pieces of feedback we get when we do these types of engagements is a realization that that group, that department or the organization hadn't had a chance to come together and just talk about current state and future state. So there's a great opportunity there. Other things to expect: expect lots of deep dives, and I do mean in technical areas, so you'll need subject matter experts to tag along with you. At the end of the day, it's a fun exercise because you get a lot more than you asked for, which is fine, because anything that's maybe not relevant right away can go to a parking lot.

From a timeline perspective, for us, we mix traditional and agile. The scope is somewhat bounded; we wanna make sure that we can get the engagement done within about eight weeks from the beginning. And it works really well, especially if you set the expectation upfront and give everyone an idea of the milestones throughout the project. I think we do a pretty good job of corralling everyone and making sure that we can wrap it up on time.

All right, so let's get to use case number two. You've done a strategy, and imagine your manager is asking you to evaluate, for this instance we'll say, two different data platforms, and they're saying that they both have to be in the cloud. The reality is that this is very much near and dear to us; we have lots of these discussions today, specifically as people are looking to set up their data capabilities in the cloud.
The typical pattern is you've got one stack, and then you wanna compare that to a new stack. And I'll throw out some names just to make this an example: let's imagine we're looking at two competing platforms, and the ones that come to mind are Snowflake and Databricks. From an AI perspective, I think we wanna look at some of the newer features, not necessarily brand new, some of this has been around for a bit, but certainly things that we're exploring. With the data platforms, we see discussion around, well, how good is it with natural language processing? Can I query my data directly, potentially not going to a BI tool, but just staying within the data platform and asking questions? The other one is what they call document intelligence, which is really doing a lot of the documentation as you are coding. And then the one that's popping up lately is, hey, we need the ability to integrate our data platform with external AI solutions, and you can do that with an MCP server. So the question is, hey, we're doing an evaluation, we wanna know how good their MCP server is, or whether they even have one. So obviously, no doubt, this is a heavily technical type of engagement.

Note the bullets I've got on top: proof of concept, proof of value, proof of technology, minimum viable product. There are different flavors, and they actually have different outcomes. So I would say there are subtleties, but they are purpose built, so we do wanna understand what they're for and make sure that we're picking the right one. And at the end of the day, if you needed to explain why do a POC, why take the time, the reality is this is a good option to experiment with your approach or capability. Can you connect all the dots? Is it working in the environment? Can it meet all the capabilities that you're asking for? And really, you get to make an informed decision at the very end: identify the issues or essentially validate a business case. Generally it is, I'll say, a lower-risk proposition, and it provides lots of information to understand what this may look like in a production environment.

Very quickly, this is very similar to the last slide. Again, lots of information you can carry over to the project charter and project management document, with an overview, what to expect, deliverables and dependencies. And you may be doing this already, but we find this amazing when you're doing a proof of concept type of engagement: give them a sense of what it might look like, a target state. What it does is promote lots of discussion internally within the business team and the technical team about the different pieces of the architecture. Deliverables are pretty straightforward: you get a technical feasibility assessment with the business value. We certainly see an implementation plan, so even if it's for the POC, maybe some understanding of the effort post-POC, with priorities if you've got use cases, and lots of architecture diagrams. And I think for any change, we certainly wanna emphasize that making the change is simple, but what takes a lot of time is adopting the change, and that does come with very specific, deliberate effort around change management and training.

All right, from a cheat sheet perspective. What we wanna start with is clear evaluation criteria. The second one is maybe obvious to everyone, but a neutral adjudicator for the evaluation piece. You certainly need the vendors to come in and show off how they can satisfy your requirements. We're suggesting, if you can do it internally, great; otherwise, if you wanna have a third party come in to be your voice in the evaluation, that's great too. Expect heavy lifting from the tech teams. We do expect this to be a shorter engagement, usually six weeks, and you'll certainly need lots of subject matter experts and input from the different groups that handle data processing or data management, that kind of stuff. It's a hybrid methodology. We communicate a timeline, I think we said this already, six weeks or forty-five days. And I think we've learned this: it's good just to raise it, and most people know it, but understand what special compliance requirements may apply to you, whether it's PII or HIPAA or other types of requirements. And certainly expect to provide an estimate, and this may be part of your evaluation, on the cost of the new technology as you are getting ready post-POC to sign contracts.

So for us, the evaluation document, I'll just say it, the way we do it, we've got categories and subcategories with requirements that we can test. We like to keep the scoring between one and five, keep it simple. And then we do a weighted calculation at the very end that gives you that objective view of how the platforms performed against the requirements.
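As a rough illustration (not taken from the actual evaluation document), a weighted calculation like the one just described might look something like this in Python; the categories, weights and scores are invented for the example:

```python
# Illustrative sketch of a weighted platform evaluation: subcategory scores
# (1-5) are rolled up with category weights into a single number per platform.
# All categories, weights and scores below are made up.
weights = {"NLP querying": 0.30, "Document intelligence": 0.20,
           "External AI integration (MCP)": 0.25, "Governance": 0.25}

scores = {
    "Platform A": {"NLP querying": 4, "Document intelligence": 3,
                   "External AI integration (MCP)": 5, "Governance": 4},
    "Platform B": {"NLP querying": 5, "Document intelligence": 4,
                   "External AI integration (MCP)": 3, "Governance": 4},
}

for platform, cat_scores in scores.items():
    weighted = sum(weights[c] * s for c, s in cat_scores.items())
    print(f"{platform}: {weighted:.2f} / 5.0")
```

The weights are where the earlier point about clear evaluation criteria pays off: agree on them before the vendor demos, not after.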
From a timeline perspective, similar to the last one: again, six weeks and a mix of traditional and agile.

All right, so use case number three. This is where your managers or leadership are saying, okay, let's get ready to develop an algorithm. At the core, this is a software development example; that's really what we're doing, and we're doing it specifically to build a machine learning model. I'll say, and we'll touch on this at the very end too, that from a project or program management perspective, if you're like me and have been doing this for a long time, the skills and experience you already have and bring as a project manager are still key to successfully completing a data science project. A lot of it is very relevant and will apply here. What's different, of course, is the life cycle. The sequences and milestones can look different. There is a heavy emphasis with data science projects on experimentation, and I'll say it's probably true that the outcomes are uncertain, because the goalposts can move as you validate results. So the reality is that at the beginning of the project, you cannot guarantee a functioning predictive model. There will be lots of research, lots of prototyping, and it goes without saying that data quality is key. What you may be developing may sound like a simple algorithm, but it could be an application feature, something you take and integrate into a much larger enterprise system. So what matters is that a lot of your development and testing may be isolated, but it really has to be ready to move into a production environment. Examples, I think, include lots of automation tasks, and there are things you can do on the data management side, like observability, profiling and so forth.
Or in some cases we see customers that are taking models and building them into experiences, for example, when you come to a website and shop.

So what are the deliverables? With a successful project, you get the business value documentation; I think we need that. Technical assets are everything from a trained model to source code, to training data, to evaluation results. Training data matters, that's your baseline. On the documentation side, lots of architecture diagrams, your schemas, feature descriptions, data processing steps, evaluation documents, etcetera. And finally, you also need the necessary assets to understand what it's gonna take to deploy, monitor and maintain the model in production.

All right, so I've got a slide on key learnings, and that's gonna take us a little bit into slides you haven't seen in use cases one and two, but important for this one. It may sound obvious, but it really deserves a call-out. The first bullet: the data needs to be relevant to your use case. And more than that, I think you need an understanding of how complete the data is. Are there gaps in it? It could be historical gaps, or it could be lopsided; it could be that you have one type of data that's dominant in your dataset, and that's gonna need to be corrected. You certainly need a well-articulated business problem. That sounds like something we get, right? Usually we can get someone to explain what they want. I think the key is that we want to make sure we can tie it back to some measurable impact, and some way for us to be able to say, yep, I think we've been successful with it. And the last bullet speaks to the iterative nature of data science projects. It is experimental, and just know that there will be lots of iterations.

Okay, so what's useful in this case? I've got a few slides here, and I'll start with this one. We saw a schedule before, and this is sort of a planning document. For me, the key is making sure that you've got similar milestones, and with the milestones, it's really about setting expectations. The first one says that, hey, it will be time intensive to flesh out the business case. It might take some time to make sure that we can agree on what that use case is. And once we find the use case, then we have to do enough research to make sure that we can find the right model to go with it. The coding and testing, no doubt, takes lots of time; be ready to adapt. The third milestone is the analysis and optimization. This is where you determine how the model will operate at scale. Can it handle the volume that you expect in prod? Will it crash? And are you happy with the execution time? All of that. The last phase, when one, two and three are complete, is presenting it back to a broader team and deciding if you have enough to move to extensive testing and a production phase.
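To make the coding-and-testing and analysis-and-optimization milestones a bit more concrete, here is a minimal, generic sketch of that loop (not from the session; the dataset and model choice are just placeholders, not recommendations):

```python
# Minimal sketch of the "coding and testing" loop on a data science project:
# train a candidate model, hold out data for evaluation, and record the
# results that become part of the project deliverables.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)   # placeholder dataset
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Evaluation results like these feed the "analysis and optimization" milestone
# and the go/no-go discussion with the broader team.
print(classification_report(y_test, model.predict(X_test)))
```

The classification report is exactly the kind of evaluation result that ends up in the deliverables and supports the decision at the last milestone.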
All right, so I think we've talked about how use case definitions are key to success. What I've got here are a few suggestions, and this is one of them. What this is gonna do is improve the rigor of the discussion. The first one is a UML diagram. I'm using a simple example: a customer interacting with a bank ATM. The thing about this is it's easy to read, at least in my mind. You've got clear swim lanes, as you can see on the slide, and the swim lanes are defined by the different actors, so you can also clearly describe what you intend each actor to be responsible for. The other thing you get is the primary use cases, and I've highlighted or underscored them in red here. And then you get things like the relationships and the system boundaries. But the bottom line is this is a very effective type of documentation that I like to get early, especially with use case definitions, so we can provide it to the design team and the engineering team.

All right, the second one is called a stepwise refinement process diagram, again, something that you may have already done. So really the question is, why is this useful? It's useful because it will take your problem statement, we'll call it the bigger problem, and break it down, and break it down again, into smaller steps, and the steps are essentially more manageable. What you get is certainly a lot of discussion, again, with the business team and the technical team, so lots of collaboration and some clarity on the requirements. Clearly, smaller steps are better for coding and testing and debugging. And you'll find that as you code, there is an ability to reuse a lot of the code in the different boxes because they may be similar.

Okay, so the third example I have is a use case diagram. You've likely done this before; the main function is to identify what the project would deliver and what is outside of scope. Like before, I think it promotes a lot of collaboration between functional and technical teams. What I've done here is underscore what the project would deliver, and those are the use cases identified within it. And this serves, again, like the other documents, as a great one for the business team to agree on and then hand off to the design and engineering teams.

All right, the last one I have is called a paper prototype, again, something you may have done before. Paper prototypes are amazing if you're having to build an app. An app is obviously heavy on the user experience angle. So what something like this will do is speed up the design phase for projects. You can iterate; a paper prototype maybe takes an hour to build, so you can discard it and build a new one as necessary. But really what it does is prioritize the discussion on the functions of the app, as opposed to the visual details, because you know it's a paper model, not a final product, and hopefully the emphasis is, let's make sure we agree on how this is gonna operate. It's also a very good one that you can potentially provide to your stakeholders to get some feedback and maybe understand if there are any usability flaws.

All right, so we talked about clarifying requirements; let's pivot to data for a second. Data ethics really are on everyone's mind. I think we're used to using the term technical debt; the other term here is ethical debt. And just like technical debt, you can accumulate ethical debt, and it has consequences. The consequences are very much that if you've got concerns about your data, with your prototype as an example, there's no ability to scale; the results aren't going to get better with more data.
But really, there's enough literature online if you want to look it up, and I think what we're seeing is that if you have lapses in data ethics, there are real potential risks around data loss, and certainly around getting the wrong answer. But we're also seeing companies or organizations clashing with regulators because of privacy violations, and that can be very expensive. So the reality is we need to start somewhere, and at a minimum, I've got a short list on this slide with three items.

Item one is about asking the question: do we have consent from our customers as we collect data? And that could be on websites or in other ways. There is an easy button for that, right? There's a pop-up that says, hey, do you consent to us collecting data, and if you check that box, you've got it. But I think that's question number one: do we have consent from our customers? If not, then the question is, do we need to anonymize the dataset? Do we remove all the private information, names and addresses and dates of birth and things like that, and anonymize the dataset? The other question to ask around data is, what's the minimum amount of data that you need to accomplish your goals? I think that's a very good rule. We don't wanna use it all just because it's there, and if you have the ability to ask that question as you're planning, I think you'll get some interesting answers. The other one is, are we following our data retention policies? If data is no longer available to us, we need to be aware of that. And lastly, and this is probably the one we do most, honestly, is making sure that we're complying with privacy or consent laws, and there are many. We've all heard about GDPR, there's the California Consumer Privacy Act, things like HIPAA, etcetera. I find that even in conversations a few years ago, that one was very much at the forefront; I think the other items are the ones we need to be paying more attention to.

On item number two, the fairness of the data is really about understanding the makeup of your data set. And the way to do it is to profile the data. You can do something like a simple distribution of key attributes; it's helping us understand if the data is skewed too much on maybe race or gender or age group, things like that. Doing a distribution, understanding where that falls, and deciding whether that looks like a good balance for the dataset is critical (there's a small sketch of that kind of profiling below). Really, I think you have to be more deliberate with this step: make some policies around it, make some rules, and make sure that it's part of the project plan. And after that, once you've got the policies or the rules enabled, there's also a need to make sure that you can audit for bias on a regular basis.

The third one, I think, is a must, and sorry, it's a bit of a repeat, but it's making sure that the ethics milestones are in the project plan. And nowadays, of course, there are specialty roles out there. If you've got bigger concerns around privacy, some of the titles are an ethics research advisor or maybe an ethical AI/ML specialist, but know that your data privacy officer obviously can play a big role in this as well.
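A minimal sketch of that profiling step might look like the following; the file name, column names and the 80 percent threshold are assumptions made up for the example, not a recommended standard:

```python
# Illustrative sketch of the fairness profiling step described above: check
# the distribution of sensitive attributes in a dataset before training.
import pandas as pd

df = pd.read_csv("training_data.csv")  # hypothetical extract

for col in ["gender", "age_group", "region"]:   # attributes you care about
    share = df[col].value_counts(normalize=True).round(3)
    print(f"\n{col} distribution:\n{share}")
    # A simple, crude flag: any single value dominating the column
    if share.iloc[0] > 0.8:
        print(f"  WARNING: '{share.index[0]}' makes up {share.iloc[0]:.0%} of {col}")
```

Whatever the exact rule, the point from the talk stands: make this a deliberate, repeatable step in the project plan, and rerun it whenever the data changes.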
All right, so a PM cheat sheet for data science, data models and algorithm building: know that the functional tasks are very critical. We talked about how to flesh out, find and refine use cases, which really means there's a heavy lift from the business team as well as the technical team as you embark on this project. The project can take anywhere from four to twelve weeks to build an algorithm and test it out properly. You'll certainly need various types of subject matter experts, technical, ethical and so forth, along with the project manager. It's certainly hybrid: you do a little bit of waterfall-ish work at the very beginning, and it's very iterative pretty much throughout. And lastly, I think we mentioned this, the key element: build your checklist on data ethics.

All right, so we're gonna pivot a little bit and talk about some of the projects that we're seeing at InterWorks in the AI space. As expected, there is certainly an uptick. I think everyone knows a little bit of what's coming, where we are and what's ahead. So it's very much around getting your data ready, or heavily skewed, I should say, towards data assets, and that leads to making sure that your foundation is solid, a foundation you can trust. What we're seeing, of course, is lots of data engineering and data architecture work. This is very much around platform selection, we talked about it: do I do Azure, do I do AWS? Then do I do Snowflake, do I do Databricks, other platforms? Lots of data modeling, lots of ETL and ELT types of work. We're also seeing a lot of governance work, and governance work does come in different flavors. It could be very narrow or specialized, for example, I need an observability tool, and observability is gonna give you that real-time visibility to ensure the quality of your data. We're seeing catalog tools, which are very much about streamlining discovery and management of data assets. We're also seeing semantic layers. That's an interesting one because I think it's solved in different ways, but the concept is to get to a place where you've got a unified business layer, for metrics as an example. Then the BI tool pilots, these are activation projects. This is making sure, especially with the GenAI and agentic flavors, that it's actionable: not just give a report, but take the next step. What do we do with it? Do we alert someone? Do we create an action flow? That kind of stuff. We're calling it Allego, but really it's a mix of open source and COTS products, and it's about creating maybe a custom application that's a lot closer to your business workflow. So we're seeing some of that. And lastly, yes, this is very much ongoing with high demand, and that's migration. It could be for data, and it could also be for moving to SaaS platforms in the BI space.

Okay, I've got a few minutes that I'll spend on data and project tools. The takeaway of this: automate as much as you can. In our case, you've got an example on the left-hand side which takes data from Salesforce and a time entry system, and we use Tableau for dashboarding; it's all connected. What it does is essentially make sure that that data is flowing, with jobs running in the background to refresh it. In this case, it's a burn report, so we can get a sense of how the project's doing, and it's very, very helpful, obviously, as you're providing statuses to clients as well. On the right-hand side, that's the data coming in from, I think, the service desk. What we have there are essentially the tickets that are coming in, what's been scheduled, what's in progress, what's completed, what's in the backlog, things like that. But again, stepping back, the idea is to automate and sync all the data sources as much as you can, make it easier for your PMs, and make it available on demand so they can pull it up anytime, knowing that it's been refreshed. In our case, we refresh every fifteen minutes. You could do it closer to real time if you need it, but it's certainly useful for internal discussions as well as external statuses with clients.
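As a rough, generic illustration of the burn calculation behind that kind of report (not the actual InterWorks pipeline; the CSV stand-in, column names and budget figure are assumptions):

```python
# Minimal sketch of a project burn calculation: time entries against a budget.
# A plain CSV stands in for whatever time-entry system you actually use.
import pandas as pd

budget_hours = 400  # hypothetical project budget

entries = pd.read_csv("time_entries.csv", parse_dates=["date"])  # hypothetical extract
burned = entries["hours"].sum()
by_week = entries.set_index("date")["hours"].resample("W").sum()

print(f"Burned {burned:.0f} of {budget_hours} hours ({burned / budget_hours:.0%})")
print(by_week.cumsum())  # cumulative burn for the trend line on a dashboard
```

In practice the same numbers would come straight from the time-entry system and land in the dashboarding tool on a schedule, which is the automation point being made here.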
The space is crowded; there are lots of platforms out there, I think I said this before. For our purposes, what we really focused on was making sure that the platforms are all SaaS platforms, that we had the ability to collaborate, and that we had different roles with different permissions across the board. We've got a global footprint, teams in Europe and APAC, and so do our clients, so we needed the ability to do asynchronous work and to support the different teams. And the other one is that we very much needed this to work for the personas that are using the tool for their purposes, making sure that we could empower the stakeholders. We're very active in getting feedback, ideas, concerns, risks and issues and all that stuff so we can improve. And all of that, at the end of the day, is very helpful for the project manager.

All right, so I think we're close to finishing here. I've got just a slide left. We've talked about trends, we've talked about use cases, we've talked about tools and AI. This is a little bit more of a self-reflection, I'll say, because it's very much my experience. The way I look at it is we've got three pillars to being an effective project manager. The pillars are the experience and knowledge you have as a project manager, the soft skills on the right side, and then what we call the subject matter expertise on the left side. And sometimes it's not necessarily about having deep subject matter expertise, but subject matter knowledge, that's the SMK. What that means is that if you're a generalist and you understand the concepts, I think that's also very useful. The reality is, I said I've been doing this for fifteen years, and it's very true that one of the circles has been more dominant than the others at times. I started as a business analyst, I became an architect, I did lots of coding, and then I got into project management, so for me, I really had to study the craft. What I find interesting today is that, because of AI, I'm spending more time on the subject matter side: understanding the application of AI, how AI is changing the way we look at data and how we analyze it, and how it's impacting our projects. So the summary on the right, I think, is what I find most useful, and I'll leave this with you as well: you can see I've got foundation and foundation plus. I've said this before, and the experience you already have as a project manager or program manager is essentially exactly what you need.
And so I think you need that foundation. What I found, again, for me on the PM side, is that I certainly did waterfall before, and agile before, but it's really about understanding the subtleties of what an AI project looks like, what the iterations are, and what it takes, even when you go into production, to maintain a model, with data ethics as an example. Those are the new things on the project management side. On the foundation of soft skills, I think we can always grow, so I'm just going to leave it at that; there's more we can do. And really, the other circle is where, and I think everyone is doing this, we read a lot, we read blogs, we read books, we take courses, we take workshops, and all of that helps. As a program manager, I think understanding the key concepts is where we wanna be. So we certainly wanna know what machine learning is, what the elements of it are, how you maybe evaluate a model, and so forth.

So that's really it. I just wanted to end on this note and let you know that we all have the foundation we need. It's recognizing, okay, we need to learn a little bit more because the technology is changing so rapidly, but we can get there. I think that's really it. So if it's all right, I'm gonna put this slide up; this is the way you can get to me. You can see my email address is listed there. I'm very happy to connect with you and answer any specific questions you may have. And with that, I'm gonna take a look at the Q&A to see if we've got any questions at all. I don't see anything in the chat. Hey, Jenny, I'm not seeing the Q&A box, are you seeing it by any chance? Hey, Nayeem, I don't see any questions. I think you got it all covered thoroughly. I think we're good. Appreciate it. All right, everyone, I'm happy to give you some time back. I really just wanna say thank you so much for joining, and hopefully you found some of the content useful. Certainly find me on LinkedIn or send me an email if you've got any questions. All right, thank you so much.

In this video, Nayeem Ahmed, Senior Director of Strategy & Solutions, Americas, covered how IT leaders can build real value with AI and program management. With over twenty years of industry experience, as well as certifications as a Project Management Professional and Scrum Master, Nayeem brought deep expertise in managing large-scale implementations and AI-driven initiatives. He discussed proven frameworks, best practices for structuring AI projects, real-world use cases, and strategies for evaluating and adopting emerging technologies. Nayeem’s unique insight helped viewers navigate trends, data ethics, automation, and successful project delivery in today’s dynamic IT environment.
