Automated Application Mapping
Customers expect the HP UCMDB to discover all of their software applications. But the out-of-the-box (OOTB) solution requires specific discovery jobs (which must be written if not available) or updates to specific configuration files before new applications can be discovered. Considering the number of competing vendors in a single technical silo, the pace at which software changes, and all the custom-built (in-house) software, this becomes a product limitation and therefore a gap for customers.
Leverage Discovery provides an alternative approach. While we develop and maintain discovery packages to go after specific vendor-instrumented technologies, we also have general discovery packages that provide a level of visibility for the rest. Those packages employ techniques that provide coverage for software lacking a specific discovery job.
With our general discovery packages, the information necessary to qualify and group software does not come from static entries (e.g. the Application Signature file, the portNameToNumber file, or user-defined parameters); rather, it comes directly from the source: the discovered server.
Leverage Discovery has provided solutions for Data Center migrations, Disaster Recovery readiness, security vulnerability assessments, application onboarding efforts, ITIL disciplines, and more - simply by augmenting the HP UCMDB product with additional packages. We offer custom solutions through enhanced packages in many areas, including discovery, mapping, integrations, exports, imports, and reporting.
SOLUTIONS BUILT FROM PACKAGES
Software Outside the UCMDB Support Matrix
Total Cost of Ownership (TCO)
Calculating the current cost of an application is not as easy as it may seem. Asset Management solutions without direct feeds from discovery and mapping, covering the current state of every entity involved, cannot accurately calculate current costs. TCO requires financial data, of course, but it also requires point-in-time context for the CIs you wish to calculate TCO on, point-in-time infrastructure context for any supporting apps/services, relationships across the sub-components required for the calculation, and a robust engine to walk the inter-dependencies and perform the calculations.
Consider a Windows server in "App X" running on a VM. Before the actual cost of "App X" can be calculated, all dependent costs must first be calculated for the contained server. And before you know the cost of that one server, all of its dependent cost points must be calculated, including the total cost of the VM. Follow us down the rabbit hole. Say the cost model for a VM at your company is the total cost of the infrastructure plus the software licensing for the VMware cluster hosting that VM, divided by the number of VMs running in that cluster at that point in time. The cost can change drastically over a year, but each time the job runs, a new calculated cost is applied for the number of days covered. The cost grows more complex if the ESX servers in that cluster are covered by monitoring solutions, have service agreements, or carry other cost points. All of these are additional data points that need to be calculated before everything is finally rolled back up into the final hardware cost of the single VM supporting one Windows server in "App X".
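The roll-up above can be sketched in a few lines. This is a minimal illustration, not the product's implementation: the CI names, rates, and VM count are invented, and a real UCMDB-based solution would read these values from discovered CIs.

```python
from dataclasses import dataclass

@dataclass
class CostPoint:
    name: str
    annual_amount: float

@dataclass
class CI:
    name: str
    cost_points: list  # direct costs (licenses, service agreements, ...)
    children: list     # dependent CIs whose costs roll up into this one

def rolled_up_cost(ci, days=365):
    """Walk the dependency tree and pro-rate each cost point for `days`."""
    direct = sum(cp.annual_amount * days / 365 for cp in ci.cost_points)
    return direct + sum(rolled_up_cost(child, days) for child in ci.children)

def vm_share(cluster_ci, vm_count, days=365):
    """A cluster's cost is shared by the VMs running on it at that point in time."""
    return rolled_up_cost(cluster_ci, days) / vm_count

# Invented example: one quarter (90 days) of a 25-VM cluster.
cluster = CI("ESX Cluster", [CostPoint("hardware", 60000),
                             CostPoint("VMware licensing", 12000),
                             CostPoint("monitoring", 3000)], [])
vm_cost = vm_share(cluster, vm_count=25, days=90)
server = CI("Windows server", [CostPoint("OS license", 800)], [])
app_x_cost = rolled_up_cost(server, days=90) + vm_cost
```

Because the VM count and the rates are read fresh on each run, re-running the job yields a new calculated cost for the days covered, which is exactly why the figure drifts over a year.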
That's what this solution enables through the UCMDB tooling. Leverage Discovery provides the framework to automate current-cost calculations for "collection" CIs like Business Applications. Costs are calculated at regular intervals so they accurately reflect changes in infrastructure and rates as time progresses. The rolled-up cost is stored in a new CI in the UCMDB, along with all the corresponding cost points (name/amount), and then related to the corresponding CI. The "actual" calculated costs from our solution can be compared and contrasted with the forecast tracked in a customer's Asset Management system.
There are several independent areas that apply to this topic: automatically associating a logical/business name to an application map, automatically creating an application map, and automatically updating an application map.
Let's address those areas in reverse order.
Automatically updating an application map:
Pattern Based Models (PBMs) in the UCMDB can dynamically map datasets via patterns specified by TQL definitions. But the functionality is rarely leveraged to its full extent, because the UCMDB must first gather the qualifying data before a TQL can use it. In this case, the obstacle blocking automated updates to an application resides within the discovery manager, not the modeling manager. Leverage Discovery overcomes that obstacle with additional discovery designed to capture dependencies, not just content.
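The idea behind pattern-based mapping can be illustrated generically. The sketch below is not UCMDB's TQL syntax; it is a hypothetical stand-in showing how a pattern (a root CI plus the related CI types to pull in) can be re-evaluated against the current CI graph, so the model updates itself whenever new discovery data arrives.

```python
# Invented CI graph for illustration; real data would come from discovery.
cis = [
    {"id": 1, "type": "business_app", "name": "App X"},
    {"id": 2, "type": "tomcat", "name": "tomcat-prod-1"},
    {"id": 3, "type": "oracle_db", "name": "orcl1"},
]
relations = [(1, 2), (2, 3)]  # (parent_id, child_id) dependency edges

def evaluate_pattern(root_id, wanted_types, cis, relations):
    """Walk dependencies from the root, keeping CIs of the wanted types."""
    by_id = {ci["id"]: ci for ci in cis}
    result, frontier, seen = [], [root_id], set()
    while frontier:
        current = frontier.pop()
        if current in seen:
            continue
        seen.add(current)
        if by_id[current]["type"] in wanted_types:
            result.append(by_id[current])
        frontier += [child for parent, child in relations if parent == current]
    return result

# Re-running the pattern after discovery refreshes the map automatically.
members = evaluate_pattern(1, {"business_app", "tomcat", "oracle_db"}, cis, relations)
```

The point of the text stands out here: the pattern is only as good as the dependency edges discovery has captured, which is why the obstacle sits in the discovery manager rather than the modeling manager.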
Automatically creating an application map:
There are two native ways of grouping CIs within the UCMDB: code logic and TQL definitions (Enrichments / Models). We've found that the best approach uses both. What are the "right" TQLs to reuse across a large percentage of apps? Which pieces/parts do you need to ignore, and which do you need to consider? You'll get faster answers and better automation from consultants who have done this before. We create and refine general TQLs to suit the customer's environment, and those general TQLs can then be reused across applications.
What if you can't see all of the important pieces/parts? It's simple: you must discover components before you can map them. It's worth highlighting a deliverable of our general discovery packages, which are used in this solution but are described further in the "Software Outside the UCMDB Support Matrix" section. With our general discovery packages, the code contains heuristics to automatically profile and group software. Those packages create additional collections that can be leveraged directly or refined into Technical Applications (a.k.a. supporting infrastructure). When our automated mapping of technical components is added to the mix, it enables TQLs that leverage lower-level context without requiring specific discovery jobs for everything running in a customer's datacenter.
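One such heuristic can be sketched as follows. This is an assumption-laden illustration of the general approach, not the packages' actual code: the process listing is invented, and grouping by shared install directory is just one plausible heuristic among several (others might use listening ports or parent processes).

```python
from collections import defaultdict

# Invented process listing; real input would come from the discovered server.
processes = [
    {"pid": 101, "cmd": "/opt/acme-billing/bin/worker --queue=a"},
    {"pid": 102, "cmd": "/opt/acme-billing/bin/scheduler"},
    {"pid": 103, "cmd": "/usr/sbin/sshd -D"},
]

def install_root(cmd, depth=2):
    """Heuristic: take the first `depth` path components of the executable."""
    exe = cmd.split()[0]
    parts = exe.strip("/").split("/")
    return "/" + "/".join(parts[:depth])

def group_by_install_root(processes):
    """Cluster processes that share an install directory into one candidate group."""
    groups = defaultdict(list)
    for proc in processes:
        groups[install_root(proc["cmd"])].append(proc["pid"])
    return dict(groups)

groups = group_by_install_root(processes)
# "/opt/acme-billing" surfaces as one candidate application grouping
```

Groupings like this give the mapping TQLs something concrete to anchor on, even for software no vendor-specific discovery job knows about.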
Automatically associating a logical/business name to an application map:
This area is harder to automate, since the name is non-discoverable context. The majority of a datacenter can be discovered and mapped without meetings with application teams, but this part usually does require customer interaction. The exception is customers with a large percentage of internally developed software applications, where one of their tools (build, source versioning, release, etc.) already associates an official logical name with the discoverable pieces/parts; there, the "automated" part comes from an integration with that tool.
For most customers, that integration is not an option. So historically, advocates of top-down application mapping have implied that since this part usually cannot be automated, neither can the rest. That's far from the truth. Even when a customer can't automate the association of discoverable components to the logical name, they can still leverage automation in the other two areas: creating and updating maps. Let us show you how.