There are three “real” types of configuration we worry about when we get to the testing phase of a project. Let me start by saying that I’m not a validation expert. However, having been part of many projects, I will do my best to illustrate the basics of the three types of configuration we have in our LIMS, ELN, and LES projects.

First, the Misconception
The misconception is that “configuration” involves no programming, or that the programming we do is minimal. This is absolutely not true. Based on the way implementations are categorized, 99% of what we do falls under the category of “configuration.”

In past years, I tried to convince people in our industry to stop using this confusing terminology because it misleads customers who are new to it. Having failed at that, my latest effort is to get the word out to every customer entering our industry that “configuration” doesn’t merely mean that you press some buttons.

Granted, there actually are systems customers can purchase that involve no programming at all, but customers new to our industry don’t know which those are. They don’t know to ask whether programming is required.

One Definition/Clarification
Below, I will talk about OQ, which stands for Operational Qualification. An OQ checks that something works as it’s supposed to. It’s a term used in the validated end of our world. However, even if you aren’t in a validated environment, someone still has to do this testing. It won’t have a fancy name, but it has to get done regardless. I use the term OQ to denote the level of testing to do, but you all have to do it, regulated or not, or your system might not work.

The Three Types of Configuration

  1. Simple lists – visual checks. When you put lists into your system and there are no actions associated with them, you probably won’t write a script for them. If you’re merely entering something like a list of audit prompt phrases, rather than writing a script, you are more likely to have someone do a visual check to verify the spelling.
  2. Data setup by pressing buttons, but with complexity – testing the item. When you create data items such as tests in your system, where you might have been able to set everything up by pressing buttons but where actions are associated with the items, you are likely to do more than just have someone visually check them. If you were to write an OQ (Operational Qualification) script, how you write it might vary. You might want to at least list all of these items and give input and output information so that you can check that the end result is what was expected. In this case, you might not write much in the way of testing steps, but you probably do run through each of these items and record that you have done so. Where some items are copies of others, it is sometimes acceptable to note that a copied item does not need retesting, though this varies with the situation.
  3. True programming – actual OQ scripts. Once again, whether or not you’re regulated, if you have configuration that requires many, many lines of programming, you have to test it. In the regulated world, this is where you almost certainly write an OQ script with many steps in it. In the non-regulated world, you still have to test all those steps; it just might not be recorded. The point is this – in systems that require programming for implementation, whether they have you use their proprietary programming tools, such as LIMS Basic or VGL, or whether you use something like C# or Java, you still have to test it. It’s still programming. We can argue about whether we should call it “configuration” or not, but when you write code, you have to test it.*

* Some would argue that much of the code in these systems is just one-line snippets meant to fill a field and that such code doesn’t need to be tested. That might be true. However, that is usually such a tiny portion of all the code in these systems that I think it’s grossly ridiculous to bring it up as if it were the main issue.
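As a rough sketch of the type-2 approach described above – listing each configured item with input and output information and recording that each was checked – something like the following could be used. The item names, calculations, and values here are all invented for illustration; a real OQ record would use the lab’s actual configured items.

```python
# Hypothetical table-driven OQ-style check for configured test items.
# Each entry lists the item, its inputs, and the expected output, and the
# run is recorded so you can show that every item was exercised.

def ph_adjustment(raw):
    # Stand-in for a configured calculation (invented for illustration).
    return round(raw * 0.98, 2)

def assay_percent(found, claim):
    # Another invented configured calculation.
    return round(found / claim * 100, 1)

# (item name, calculation, inputs, expected output)
oq_items = [
    ("pH Adjustment", ph_adjustment, (7.10,), 6.96),
    ("Assay %",       assay_percent, (49.8, 50.0), 99.6),
]

def run_oq(items):
    """Execute each check and record one result line per item."""
    record = []
    for name, func, inputs, expected in items:
        actual = func(*inputs)
        record.append((name, inputs, expected, actual, actual == expected))
    return record

for name, inputs, expected, actual, ok in run_oq(oq_items):
    print(f"{name}: in={inputs} expected={expected} got={actual} "
          f"{'PASS' if ok else 'FAIL'}")
```

The point is not the code itself but the record it produces: a line per item showing input, expected output, actual output, and a pass/fail flag that can be retained as evidence the item was tested.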

Gloria Metrick
GeoMetrick Enterprises

3 Thoughts to “Validation and Configuration: The Misconception About the Three Types”

  1. lemoene

    Thank you Gloria. I have not had much experience with what I expect is a mainly North American practice of LIMS validation. Operational Qualification is quite a neat term; it could sort under acceptance testing.

    Please would you comment on an Open Source perspective:

    * The program code is always available and quite rigorously versioned, and diffs can quite easily be run on GitHub to lift edits between versions.

    * In our particular case, there is no programming during configuration, except for keying in formulas for calculated results. All set-up and configuration items are versioned, and full audit trails are kept available.

    * We probably only have 80% coverage at the moment, but the system is UI-tested using Robot Framework tests in the code. This can be recorded if necessary.

    * Functionally, ISO standards, e.g. ISO 17025, are adhered to.

    How should labs proceed from here to validation?

    1. Not being a validation person, I’m not really the one to ask.

      But I can give two different perspectives to keep in mind.

      First of all, merely from the programmer’s perspective, I will say this – if the code is available and if it gets changed, it HAS to be retested. Even if you only add a comment, you introduce the possibility that you bump the “enter” key and change something without realizing it. This is where we get into discussions about how much regression testing to do. Once you touch it, you must figure out how much retesting to do before you hand it back to the customer. Hopefully, the end customer using it can see it as a black box, even if a customer working as a super user might have to know more details for the UAT (User Acceptance Testing).

      The other perspective is this – to the end user, it must work. To work, it has to have been tested. Here, we get back to the issue that, if you change something, you must retest it to ensure that you didn’t break anything the end user needs. Ideally, the end user will never run into a bug, especially one introduced by system changes. In reality, that is merely our ideal, not the outcome. Still, it should be the goal of everyone involved to let not a single bug past their portion of the process. By doing that, we greatly minimize the bugs in the final system.
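      One way to picture the retest-after-any-change point is a small regression check that pins the behavior users already accepted, so even an accidental keystroke in the code is caught. The function and values below are invented for illustration, not from any particular LIMS.

```python
# Hypothetical regression check: pin input/output pairs captured from the
# version the users accepted, so any later edit that changes behavior
# (even one made by accident) fails the check before release.

def dilution_factor(final_volume, aliquot_volume):
    # Invented configured calculation; a stray keystroke while "only
    # adding a comment" here would silently change results.
    return final_volume / aliquot_volume

# Pinned cases from the accepted version: ((inputs), expected output).
pinned_cases = [
    ((100.0, 10.0), 10.0),
    ((50.0, 2.0),   25.0),
]

def regression_check():
    """Return True only if every pinned case still gives the old answer."""
    return all(dilution_factor(*args) == expected
               for args, expected in pinned_cases)

print("regression:", "PASS" if regression_check() else "FAIL")
```

      How many such pinned cases to keep is exactly the “how much regression testing” judgment call mentioned above.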

      So, what this comes down to is that anyone writing code needs to follow a software development life cycle (SDLC)/software life cycle (SLC) that is strict enough to ensure high quality for all users, regulated or not. Programs shouldn’t be allowed to be crummy just because they’re going to non-regulated areas. Being non-regulated merely means there is somewhat less end documentation, but the actual software development should remain the same.

      Note to the customers: if you find all this too complicated, then you should either get people on your team who will ensure you manage this in your implementation, and some of them MUST be internal to be able to manage your side of the process -OR- don’t select solutions that require programming. Those are the two choices as far as I’m concerned.

  2. lemoene

    Thank you Gloria. In our case, the UI Robot Framework tests are run after commits, and code with failing tests is not merged for distribution. No doubt these will have to be expanded to satisfy validation requirements, but the test output can be screen-captured if required by regulatory authorities.

    But as you say, nobody wants a bad system, least of all the devs having to deal with comebacks.
