How Oracle Argus Safety Migrations Work

Okay, let’s get started. Hello everyone! Thank
you for your patience and welcome to the webinar titled How Oracle Argus Safety Migrations
Work, presented by Dr. Rodney Lemery, Vice-President of Safety and Pharmacovigilance at BioPharm
and Richard Wells, Head of Sales at Valiance. I am Eugene Sefanov, the Marketing Manager
at BioPharm and I will be going over some housekeeping items before I turn it over to
Rodney and Richard. During the presentation, all participants
will be in listen-only mode. However, you may submit questions to the speaker at any
time today by typing them in the chat feature which is located on the left side of your
screen. Please type your questions clearly and keep in mind that other webinar participants
will not see your questions or comments. Nonetheless, your questions to the speakers will be addressed
as time allows towards the end of the presentation. If you still have unanswered questions after the
webinar or would like to request additional information from BioPharm or Valiance, feel
free to visit our companies’ websites for contact information. We will also be giving
the contact information at the end of the presentation. As a reminder, today’s webinar is being recorded
and will be posted on BioPharm’s website within 24 hours. We will also be e-mailing you a
link to the recording and a PDF version of the presentation. This concludes our housekeeping items. I would
now like to turn the call over to Dr. Rodney Lemery and Richard Wells. Rodney, do you want to go ahead? Yeah. Good morning everyone or afternoon,
depending on where you are. Thank you for joining us today. Today we just want to cover an overview of
the data migration effort that our two companies have partnered to perform. We are going to
look at, in general, some of the issues that come up with data migration efforts in our
industry. We are going to look at what we believe the industry is looking for as a potential
solution for these migration opportunities and then we are going to cover our specific
solution to the problems identified in the previous two slides. So, Richard, I think you are first. So, I
am going to just go to the next slide and let you start the discussion. Thank you, Rodney. Good afternoon or good morning everyone! Richard
Wells at Valiance Partners here. I head up our sales and business development initiative
and over the past several years I have had the opportunity to work closely with a number
of our clients with safety data migrations and Argus migrations specifically and that’s
given me the opportunity to really speak to them and hear about the challenges that
they see in the industry. I also have with me here today Mark, one of our senior
migration consultants and one of our senior people in designing and executing these
safety data and Argus migrations, and I am hoping that, between the two of us, what we
have heard and seen with our clients and the solutions we have put in place have
enabled us to outline to you an excellent migration solution
that we have put together with BioPharm to present to clients looking for solutions to
migrate to Argus. I will start off by just taking a look at
the state of industry as Rodney suggested. Research or studies have been done into understanding
what the state of the migration industry is and how companies address these challenges
and the types of challenges that they face when implementing these migration projects
and this particular table is looking specifically at GxP migrations. And what was found some
years ago was that at that point in time only 16% of migrations were being done successfully
without impact to budget or schedule. Now, over some years that number has improved and
it’s now in the range of something like 60% but when you think about it, 4 out of 10 migrations
going over budget and/or over schedule is a significant portion. So, we are obviously
seeing significant challenges in the industry and the chart here shows when people are surveyed,
the nature of what they view as the challenges that caused them to have issues with their
migration and if you just focus on the big bars at the bottom where the major problems
are, if you look at the numbers, I think that's 1, 3, and 7, we are looking at things like
scoping issues, data quality, and lack of clarity on how to address data quality issues, we
see that the data and the requirements that people face are a significant challenge in
successful migrations. If you look at the other ones that are at
the top of the list, you are looking at things like expectations, lack of support from users,
the methodology. So, we are pointing to the fact that the methodology and the data are
what’s causing the problem. Now, this isn’t a surprise to us because frankly, when you
are having methodologies that are traditionally used — which I will get into in a moment
— the methodologies that are traditionally used are not designed to address the data
quality challenges and therefore the methodologies are for when it comes to finding a way to
implement migration successfully. Now, this is GxP migrations, if you will.
We are speaking specifically to Argus migrations or safety data migrations and when we look
at safety data migrations, we have additional challenges that cause us problems here. First
of all, when you have to perform a migration, you are looking at some form of tool. So,
what are we going to be using? Some companies will look at ETL tools, and ETL tools are very
effective at database migrations, but they do not have built-in facilities to manage
the nuances of safety data migrations, if you will. It really becomes starting from
scratch looking at the database. Companies are also looking at E2B, for example.
There are tools in place that help them transfer E2B records, but most of the migrations that
we see, in fact, all the safety data migrations that we see have a requirement to migrate
non-E2B fields. So, you are now looking at E2B standards and some other form of tool
scripting in order to get the data to move where we need it to move. The methodologies
that are used rely on a manual validation approach, and this is very time intensive. This
affects all GxP migrations, but it's particularly relevant here in the safety data
realm because you don't really get more critical data than what's going into a safety data system.
So, you won't find validation requirements more stringent than they are here; this
is where the most stringent requirements are. So, the labor-intensive work is going
to be emphasized and have a great impact on both cost and schedule and when we talk about
methodology, we will see how that can magnify into a significant problem. Additionally, there are functional gaps that
may be identified during a migration and this requires us to go back and take a look at
how the systems are configured and we will speak to that as we go through the presentation
today as well. So, you have got approaches today based on custom scripting. They use
manual sampling, and they become very repetitive and do not address the requirements we see. If you look at the data level and
pull back to the first issues we identified, the challenges we have are data quality
and methodology. So, I think most people on the call will be able to understand why data
quality is an issue: you have a source system on one side and an Argus system on the
other, each designed differently, and although there are some standard configurations,
every customer's system is configured differently. So,
we are taking data from one system and trying to move it to another system and we are going
to have to transform that data along the way and understanding how that data is going to
sit in the target system and be viewed and used by users in that system. There has really
never been a good way to analyze those gaps effectively and therefore understand data
quality issues. So, you are going to have to sort of understand how that plays into
the data quality being at the top of the list when we looked at challenges. This is related to methodology because what
happens in today’s traditional way of implementing migrations is we use the sampling methodology
to test the migrations: we pull a sample of data in the first test run, we find some
errors and we remediate those errors. We go back and pull other samples during the next test migration
we do and, because it's a sample, the second run surfaces errors that weren't
in the first, and you can understand that, well, if I couldn't effectively address my
data quality issues before and I know I am running this test with those unidentified
data quality issues in place, well how many times am I going to have to repeat this sampling
methodology until I get to the point where I am comfortable and then take it one step
further and say “Okay, let’s say we get to the point where we are comfortable with our
testing and we go to validation” and your sampling approach in validation now also starts
to pull out those errors. Now, we have validation processes, documentation issues and it becomes
a whole different issue with how we remediate those errors and the hoops we have to jump
through, if you will, in order to get to the point where we are comfortable and ready to
go forward. So, this creates significant challenges in the migration environment. So, when we are speaking with our clients
and we talk to them about not just the requirements but about the kind of things they would like
in solutions today, there are a number of things that come to light. They are looking
for track record of success and experience in migrating safety data. Of course, safety
data migrations aren’t new. Companies have been doing it for quite some time. I think
the reason they are looking for this is because companies have done these migrations and suffered
through the pain of the issues I spoke to and aren’t necessarily looking for a company
that is doing it the way it has always been done, but I think another way of saying it is “Can
you show us solutions that are more effective than what you have got today?” They are looking
for someone with that kind of experience. They are looking to get out of the manual
and scripting approach and asking whether there is some form of out-of-the-box way of doing these migrations.
As I said before, the configurations of safety systems, whether it be Argus or other systems, are somewhat
standard and although each customer will configure or customize it to a certain degree, it’s
somewhat standard and companies are aware of this. So, they are asking, “Okay,
my configuration might be a little bit different from the standard, but is there a new kind of solution
that will enable me to leverage some kind of out-of-the-box configuration in order
to overcome the issues that we have seen in the past?” And then on the back end, when we are testing
these migrations, it’s interesting if you ask a client “Well, okay, with the traditional
approach you might use some form of data sampling, tell me what’s the acceptable level of error
that you are willing to take in this data migration, the safety data migration?” and
of course the answer is always “Well, none” but sampling itself as a process is driven
by an acceptable level of error. To determine your sample size, one of the first things
you do is decide what your acceptable level of error is going to be. If you set the
acceptable level of error to zero, then you need a sample size of 100%.
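To make that arithmetic concrete, here is a minimal sketch of the standard acceptance-sampling logic; it is purely illustrative (the case volume and confidence level are invented) and is not taken from the speakers' tooling. The sample needed to be reasonably confident of seeing at least one bad record grows quickly as the tolerated error rate shrinks, and at a tolerated rate of zero the only defensible "sample" is every record.

```python
import math

def required_sample_size(tolerable_error_rate: float, confidence: float, population: int) -> int:
    """Smallest sample likely (at the given confidence) to contain at least
    one bad record if errors occur at `tolerable_error_rate`. Illustrative
    only; real acceptance-sampling plans are more involved."""
    if tolerable_error_rate <= 0.0:
        return population  # zero tolerated error means inspecting every record
    n = math.log(1.0 - confidence) / math.log(1.0 - tolerable_error_rate)
    return min(population, math.ceil(n))

cases = 50_000  # hypothetical migration volume
for rate in (0.01, 0.001, 0.0001, 0.0):
    print(f"tolerated error rate {rate:.4f} -> sample size {required_sample_size(rate, 0.95, cases)}")
# tolerated error rate 0.0100 -> sample size 299
# tolerated error rate 0.0010 -> sample size 2995
# tolerated error rate 0.0001 -> sample size 29956
# tolerated error rate 0.0000 -> sample size 50000
```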
of course that wasn’t available. So, companies that had to do sampling were looking for a
way to get rid of that error level and then finally companies instinctively know that
this process must be user driven. Migrating data between two systems and even knowing
that you did it with 100% accuracy does not ensure that the data will show up and be usable
in the target system as the users are anticipating and companies are aware of this and they want
users involved in that process. The big challenge though is because of the sampling, if we get
into a scenario where we are repeatedly sampling, your burden on your users becomes impractical
and all of a sudden you are not only using a sampling system that isn’t testing at 100%,
you are not able to get the users involved to the level that you really need them to
be to ensure that the migration can be successful. Rodney, I think there are a couple of things
on the next couple of slides that perhaps you want to speak to. Sure. So, in the past two to three years that
BioPharm has been working with Valiance on these migration efforts, we have also found
what many researchers in the field of health information systems have found and that is
that there are some procedural concerns in these migration efforts as well as technological
concerns and Richard’s already alluded to a few of those. On the next slide I am giving
you some recent research from some health information key players in the industry who
have suggested that many times the clear agenda in these types of migration efforts is really
important — the scoping of the project, testing, all things procedure related — and then especially
— and we will talk about this in the next coming few slides — the functionality of
the application post the data migration. So, this is beyond just getting the information
from the source into the target, but it’s actually then making sure that the application
is fully functioning. Then, in addition to these types of scope and testing issues that
come up with data migration efforts, there are often times training and process re-engineering
concerns that have to be addressed. These could be new systems being implemented. So,
the source may be a legacy system and the new target may be, for example, Argus Safety, which
means the processes that are currently written to, say, Empirica Trace, would need to
be updated and the users would have to be retrained on the new system. So, this type
of proper training and process re-engineering for your team and staff are also key to the
success of these migration projects. This is where the partnership between our two companies
has been very successful in that we bring to the table a large amount of pharmacovigilance
safety expertise from a wide variety of companies — generics firms, devices, biologics and
pharma. So, we will go to the next slide. Okay. So, basically, our solution then is
an intensive partnership, which we have branded Accel-Migrate, and in this process what we
perform are detailed assessments which may include the data mapping which Richard will
talk about in a moment, high-level scoping, data dictionary concerns, data discovery concerns.
We do tend to offer these in a fixed fee engagement, but this would require the project to be very
well scoped which, if you remember in the previous slides, was one of the important
keys to success for these types of engagements. Richard, do you want to talk specifically
about the migration effort? I will speak to it on the next slide, Rodney. Okay. So, let’s go to the unique methodology
then. We will just jump right in. Okay. I want to now walk through the process
that BioPharm and Valiance have put in place for the Accel-Migrate solution. Now, in the traditional process I have spoken
to, we were relying on manual sampling for our testing approach and scripting and ETL
tools, for example, for the actual migrations. So, a couple of key things have changed in
that process. We obviously have a mapping process and a planning and requirement gathering
process that we walk through with BioPharm and our clients in order to make sure that
we are putting an initial mapping specification in place and then what we do next is we walk
through a pre-migration testing process. If you recall, one of the challenges I spoke
to earlier was the inability to address data quality issues and understand the data quality
issues and the data cleansing requirements early in the process. Valiance has automated
testing technology that we have developed which, at this point in time, allows us to ask the question, “Okay, if
I look at data in my source and apply the transformation, will it fit into my target?” Now, this doesn't
necessarily require the actual target to be populated, because we can apply assumptions about
the target and help identify errors. For example, if we look at the major coding, the five main
dictionaries the companies require, we can use this process to test the consistency of
those and to test whether coded values in the source, in fact, are going to map to valid values
in the target. That's one of the biggest issues: making sure those things are all clean and
all consistent as you move through the migration, because errors there can
lead to considerable data remediation efforts later. So, we can use this process with an
automated tool in order to identify those cleansing requirements early and really refine
the process, and when you think about the issues I mentioned earlier, the scoping and the data
quality issues, this really helps overcome a lot of them. As Rodney mentioned
previously, this is also one of the tools that helps us be able to put in place the fixed-price
migration effort that he alluded to.
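As a rough illustration of the kind of pre-migration dictionary check being described, here is a hedged sketch; it is not the actual TRUCompare mechanism, and the terms and dictionary entries are invented. It simply flags coded terms in a source extract that would not resolve to a valid term in the target dictionary, directly or through an agreed synonym mapping.

```python
def find_unmapped_terms(source_terms, target_dictionary, synonym_map=None):
    """Return source coded terms that would not resolve to a valid entry in
    the target dictionary, either directly or via an agreed synonym mapping.
    Purely illustrative; a real check would also compare dictionary versions
    (e.g. MedDRA releases) and hierarchy levels."""
    synonym_map = synonym_map or {}
    unmapped = {term for term in source_terms
                if synonym_map.get(term, term) not in target_dictionary}
    return sorted(unmapped)

# Hypothetical example data
legacy_event_terms = ["HEADACHE", "NAUSEA", "PYREXIA NOS"]
argus_meddra_pts = {"Headache", "Nausea", "Pyrexia"}
agreed_synonyms = {"HEADACHE": "Headache", "NAUSEA": "Nausea"}

print(find_unmapped_terms(legacy_event_terms, argus_meddra_pts, agreed_synonyms))
# ['PYREXIA NOS']  -> needs recoding or an explicit mapping rule before migration
```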
From there we then go into a process where we are setting up the migration and configuring the tools. The thing to point out here, as
I mentioned earlier, the methodology we are adopting here is leveraging the fact that
the safety data management solutions out there tend to have standard configurations. So,
we have out-of-the-box configurations for our migration tool and our migration testing tool,
and what we need to work on isn't configuring the entire migration. It's
configuring the delta between the standard configurations and the configurations actually applied
on each of the source and target. Also, we are doing this in a configurable software
application. That software application can be qualified by the client and now, as we
go through our process and update those configurations, we are doing it in a qualified software interface,
if you will, and we do not have to be going back and redeveloping and revalidating scripts.
When you combine that reduced validation effort, configurable software that is more efficient
to put together than hand-written scripts, and the fact that we are using out-of-the-box configurations,
we now have a much more cost-efficient way of getting that migration configured.
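To illustrate the "configure only the delta" idea in the simplest possible terms, here is a hedged sketch with invented field names and no relation to the actual TRUMigrate configuration format: a baseline mapping for a standard configuration is loaded, and only the entries where the client's system deviates are overridden and therefore need review.

```python
# Baseline mapping for a standard source-to-Argus migration
# (target field -> source column; names are invented for illustration).
standard_mapping = {
    "case_number":  "CASE_MASTER.CASE_NUM",
    "receipt_date": "CASE_MASTER.INIT_REPT_DATE",
    "event_pt":     "CASE_EVENT.MEDDRA_PT",
    "product_name": "CASE_PRODUCT.PRODUCT_NAME",
}

# Only the client-specific deviations from the standard configuration
# need to be specified, reviewed and tested.
client_delta = {
    "product_name": "CASE_PRODUCT.TRADE_NAME",  # custom product coding
    "local_ref_id": "CASE_MASTER.UDF_TEXT_1",   # client-added custom field
}

effective_mapping = {**standard_mapping, **client_delta}

for target_field, source_column in sorted(effective_mapping.items()):
    marker = "  (delta)" if target_field in client_delta else ""
    print(f"{target_field:13s} <- {source_column}{marker}")
```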
Then we move on to our testing. Now, of course, when you are testing a migration, you are testing it first of all in informal test
environments. When you are using the traditional process of sampling, as I said, you are pulling
a sample, you are testing it, you are finding error, you are repeating that process until
you are comfortable. Sometimes there may be a few iterations, sometimes it is a lot. Here,
at the highest level, we are running the first test migration into the test environment and we are testing
100% of it with TRUCompare. Because we are testing 100%, after that first full run we
now know all of the errors and we can remediate all of the errors. So, when we do the second
dry run into that informal test environment, there is no error. Now, I am describing this at a very high level. Obviously,
the process is a little bit more back and forth in that as you do iterations in different
phases and things like that, but at the highest level you can understand what the concept
is: test it once, fix all the errors, test a second time to verify that those errors
are fixed. Now, when we move through into our validation environment, we also use TRUCompare
to test 100%, but we already know we have remediated the errors. So, our validation and
production migrations now become an execution of that process, walking through that process
as opposed to uncovering errors that need to be remediated and going through
the cumbersome validation rework that might be involved. I do also want to point out one of the things
we mentioned earlier was the importance of user involvement. So, when I talk about that
100% testing, I am talking about the data-level errors that we can identify both pre
and post migration. That does not negate the importance of making sure that during those
dry runs, users are involved in that process. We usually have partial dry runs early on
and users are looking at the record of the case data in their target Argus system, they
are reviewing that data and providing that feedback and because we are testing 100%,
we don’t have to be looking repeatedly during an ongoing process of sampling. Their involvement
can be defined at the appropriate points and times and you can imagine how that helps stick
to your schedule and keep your budget on track as well. So, you can see how this entire process
now is linked to data quality. By addressing data quality and identifying those issues
early and then testing 100% of the data, we now have a methodology that supports that
and allows the process to be defined as opposed to being iterative and unable to be defined
as to sort of how many iterations you might have to go through to finalize it. Here we
have got a pre-defined testing element. Rodney, I am not sure if you wanted to speak
to that last bullet before we move to the next slide? Sure. I will just make a brief statement here
that as we have kind of already talked about the process re-engineering piece, it’s very
critical to the success of these types of projects and I think the bullet point here,
what we are really trying to focus you on here is that sometimes that process re-engineering
piece can affect the data migration mapping, and we certainly as a team have seen that
over the past few years where, once the users learn something new, for example, on the Argus
Safety application, they may request a configuration change, code list values or workflow
states in the system that need to be manipulated or added, which does have an impact on the data
migration itself and the mapping. So, that type of, again, close partnership and parallel
process re-engineering pieces can have a really big impact on the overall success of the migration
effort. And along the same line, again, BioPharm in
partnership with Valiance, we really do bring to the table a large amount of practical experience
with a variety of different companies and types of companies which gives us exposure
to a lot of data that perhaps we wouldn’t have seen otherwise. Migration of device-centered
data, for example, can be very different from the migration effort of pharmaceutical type
data and that is certainly something that we have had to encounter, experience, correct
and refine over the past few years. Again, we have a methodology by which we approach the
SOP rewriting and process re-engineering pieces of these types of projects so that it is done
in a structured manner, organized and, as Richard said, involving the end user community
is very important. And we have done some very heavy data migrations
and we have also done light data migrations. So, there have been a variety of different
types of projects over these past three years that we have been involved in and, again,
that just goes to the experience and breadth of our offering. And I wanted to focus here, on this slide, on the
actual data mapping and process work that we partner on, and I am sure, Richard, you
will want to speak about the ins and outs of the mapping but I just wanted to, in this
slide, make sure that we underscore the importance of user involvement and really we couldn’t
stress more emphatically the need for the user community to actually functionally test
the application post the migration effort and, again, in a moment we will discuss the
actual process and, Richard, I think you will speak more to this but I definitely just wanted
to underscore that the process re-engineering piece is important but we found over the past
several years that the user involvement in the functional testing after one of the migration
runs has been executed becomes hyper-critical to the acceptance of the migration effort
itself. Richard, maybe you want to elaborate. Yes, before I move on, the last thing I really
want to do is introduce the tools but I will take a moment to speak a little bit to the
mapping process because the methodology I spoke to before, the sort of migration process
deals with how we execute those mappings, configure the software and go through and
test the migration, but within that process one of the most critical points
is, as Rodney mentioned, the effectiveness of the mapping process, and it's sometimes
an overlooked area. You have to look at it from both the source and the target systems,
and the way Valiance looks at this is we will first look at the target values to be populated,
go through those, make sure that we know where that data is coming from in the source system
and ensure that those mappings are put in place, but then we will also go through
the source system to ensure that all the data that needs to
be captured is actually being mapped somewhere, and we have a process whereby we walk through
that with the client to make sure that is in place, and this, obviously, I think, cannot
be done without appropriate user involvement.
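As a simple illustration of that two-directional check, here is a hedged sketch, with invented field names, of what a mapping coverage test might look like; it is not the actual Valiance process. It verifies that every mandatory target field has a mapped source and that every source field is either mapped or explicitly agreed to be out of scope.

```python
def check_mapping_coverage(mapping, mandatory_target_fields, source_fields, out_of_scope_source):
    """Two-way coverage check on a mapping spec (target_field -> source_field).
    Field names used below are hypothetical."""
    mapped_sources = set(mapping.values())
    return {
        "unmapped_mandatory_targets": sorted(mandatory_target_fields - set(mapping)),
        "unaccounted_source_fields": sorted(source_fields - mapped_sources - out_of_scope_source),
    }

mapping_spec = {"case_number": "CASE_ID", "onset_date": "EVT_ONSET", "event_pt": "EVT_TERM"}
issues = check_mapping_coverage(
    mapping_spec,
    mandatory_target_fields={"case_number", "onset_date", "event_pt", "report_type"},
    source_fields={"CASE_ID", "EVT_ONSET", "EVT_TERM", "RPT_TYPE", "LEGACY_NOTES"},
    out_of_scope_source={"LEGACY_NOTES"},  # agreed with the users not to migrate
)
print(issues)
# {'unmapped_mandatory_targets': ['report_type'], 'unaccounted_source_fields': ['RPT_TYPE']}
```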
When we do this initially, this is what will help drive the pre-migration testing, but once that is done, the users are going to
have to again be involved in the process in remediating the issues that have been identified
during the pre-migration testing. It’s not enough to just have them look at it once and
assume everything is mapped the way they wanted. They really need to make sure they are looking
at it after the pre-migration testing as well and understand what it’s going to look like.
That’s the final key point in making sure that the scope and the requirements and the
mapping are effective and, if you recall, at the beginning the biggest issues were around
the scoping and requirements and this is the chance you have got to leverage a defined
mapping process along with pre-migration testing to really get that mapping specification right.
Not to say there won’t be tweaks later on as issues are uncovered, but this is the chance
to get it right and really set the stage for successful migration. Richard, maybe I can just underscore that
further with a practical example. So, over the past many migrations we have uncovered
some very interesting nuances in the data migration effort that, had we not involved
the end user community, would probably have gone unnoticed, and the one that really sticks in my mind
is the inability to gather the correct US License data into an NDA periodic report for Argus
Safety migrations. So, there are some very specific database fields that must be populated
in order for certain cases to be available to the NDA periodic report
inside of Argus, and if we had not involved the end user community in that type of periodic
reporting testing, that migration issue would not have come to light. So, again, just
kind of underscoring what Richard said, the importance of that type of user acceptance
type testing for these mapping execution runs. Thank you, Rodney. I will proceed on to a
brief overview of the tools and then I see there are some questions coming through, so
we look forward to an opportunity to answer those. One of the critical aspects of the approach
is the ability to use out-of-the-box tools TRUMigrate and TRUCompare. So, I will introduce
these very briefly. First of all, TRUMigrate, as the name implies, is the tool that actually
executes the migration. It will connect to the databases on the source and the target
systems. It is compatible with a wide variety of systems and what this means to safety data
is simply that if you need to be pulling data from something as simple as a spreadsheet, for example,
or maybe you have got data in another system that you need to migrate and you need
to be pulling it from both systems, this enables you to use both of them as source
and supplemental systems when executing the migration. There is a user interface for configuration,
which means that, as I said, the process is more efficient. However, don't let that
hide the fact that the mapping capability behind the product is very robust. The vast
majority of our migrations involve rules that are simply configured in the interface and
don’t require any behind-the-scenes scripting or coding and when I am saying vast majority,
I am talking 95%. So, we use TRUMigrate as the application because
it's not simply tied to one database. As you will see in a few slides,
it's compatible with non-safety data systems as well. We have the ability to talk to content
management systems, compliance management systems and things like that. So, we can work with
all those behind-the-scenes applications. In this particular case, we are talking about
safety. We have the out-of-the-box configurations that I mentioned too. And I will also point
out that this solution, the TRUMigrate and TRUCompare software, has been adopted in
over 400 GxP migrations to date. So, it has a good, capable and stellar, if you will,
track record of being used in these stringent environments. While TRUMigrate does bring
those efficiencies and plays a critical role in the process, really, it's fair to say that
TRUCompare is the key to the process because it has the ability to do both the pre- and
post-migration testing that enables the methodology to be as effective as it is. So, it's similar here; you have the ability to
connect to a wide array of sources. Over here we are not moving data. We are simply comparing
the data in the source to the data in the target, or comparing the source against
an assumption, if you will, in the case of pre-migration testing, and this is being done
across 100% of the records and 100% of the fields, and we can define record count comparisons
as well as defined hash-key comparisons.
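To give a feel for what a field-by-field, hash-keyed comparison across 100% of records can look like, here is a minimal, hedged sketch; it is not TRUCompare's actual mechanism, and the record structure and transformation rule are invented. Each record is reduced to a hash of its compared fields, transformed source hashes are compared with target hashes keyed by case identifier, and missing, unexpected or mismatched cases are reported for remediation.

```python
import hashlib

def record_hash(record, fields):
    """Hash the selected fields of a record in a fixed order."""
    payload = "|".join(str(record.get(f, "")) for f in fields)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

def compare_all(source, target, fields, transform):
    """Compare 100% of records keyed by case id; `transform` applies the
    agreed mapping rules to a source record before hashing."""
    mismatched = [key for key in source.keys() & target.keys()
                  if record_hash(transform(source[key]), fields) != record_hash(target[key], fields)]
    return {
        "missing_in_target": sorted(source.keys() - target.keys()),
        "unexpected_in_target": sorted(target.keys() - source.keys()),
        "field_mismatches": sorted(mismatched),
    }

# Hypothetical three-case example
src = {"C-1": {"pt": "HEADACHE", "serious": "Y"},
       "C-2": {"pt": "NAUSEA", "serious": "N"},
       "C-3": {"pt": "PYREXIA", "serious": "Y"}}
tgt = {"C-1": {"pt": "Headache", "serious": "Y"},
       "C-2": {"pt": "Nausea", "serious": "Y"}}  # flag changed in error, one case missing

transform = lambda rec: {"pt": rec["pt"].title(), "serious": rec["serious"]}  # agreed mapping rule
print(compare_all(src, tgt, ["pt", "serious"], transform))
# {'missing_in_target': ['C-3'], 'unexpected_in_target': [], 'field_mismatches': ['C-2']}
```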
Now, I think the benefit here, the 100% testing benefit, probably doesn't need much explanation:
obviously, with compliance being as critical as it is, it allows you to minimize the number of test iterations,
as I said, and you have got the compliance documentation that you need to verify with the appropriate regulatory
bodies that the migration was undertaken in accordance with the specification. Pre-migration
testing identifies 40% to 60% of the data errors. You can imagine how that helps with
the specification. One other point is the minimal investment, because minimal investment
is not always associated with migration tools, but as we get the opportunity to speak further
with some of you with regards to your specific migration requirements, I think you will be
pleasantly surprised at the efficiency that this can bring to the table and, even for
a modest scale of migration, how it can really make them cost effective. We do often get asked, when we talk about TRUCompare,
about the types of errors that TRUCompare will identify. So, we can see here you have
got just a few examples. Mapping specification errors: it happens that a requirement is, for
example, a little bit ambiguous, so someone has configured it in the migration tool and the
testing tool as best they can, but it turns out to be inaccurate; the differences between
what's configured in the testing tool and the migration tool surface those errors beforehand,
so we can remedy that and adapt the specifications as required. Again, errors are going to happen
when you are configuring the migration tools. So, those errors will also be identified during
the testing process. When we go through the process of data cleansing,
users may fill out spreadsheets, for example, to identify issues that need to be remedied
but they may fill this out in error or use values that are not valid in the destination
system. Again, we are configuring TRUCompare to test the supplemental data sources and
those values and transformations as well. So, if those kind of errors occur, that will
also be picked up. Certain data-level errors, if you are leaving
certain fields with a null value or mandatory fields are being omitted, or if certain values
have been truncated beyond what was anticipated due to configurations in the Argus system,
all these types of things will also be detected. Then there are process errors, and Rodney alluded
to one, and we have seen these kinds of errors also, where users had issues in the product
naming tables, a critical part of the process which was validated, but they failed to update
product naming values in the target system. What would have happened during sampling
is that that error was highly unlikely to have been uncovered, but then, as further cases were
added, those cases would have invalidated their analysis capabilities, their historical analysis
on that data. TRUCompare was able to pick that up, the process was able to be remediated,
and the data was effectively migrated, validated and taken through to production. We also get asked about the pre-migration
testing. So, just a couple of simple examples to help identify what the pre-migration testing
involves. For example, if you have got mandatory fields in your target, pre-migration testing
will help identify records where there is no value coming from your source for those fields.
If you have got data that is too long in the source and would need to be truncated
in the target, you will be able to identify those types of
errors and bring them out beforehand. It will help you with data duplication:
we can configure it to define ID rules on the data and identify whether we have data duplication
issues before we go through the migration process. And then, critically, with the product
naming tables and the MedDRA and WHO dictionaries, we can configure rules to test whether
all the values that we would be migrating in from the source are in accordance
with those lists in the target, and those things are going to be tested during the pre-migration
testing phase.
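Here is a rough sketch of what such field-level pre-migration checks might look like in code; it is purely illustrative (the field names, length limit and duplicate rule are hypothetical, not TRUCompare configuration) and simply flags empty mandatory values, values too long for the target and duplicate case identifiers before any data is moved.

```python
def premigration_checks(records, mandatory, max_lengths, dedup_key):
    """Flag empty mandatory fields, over-length values and duplicate keys.
    The rules here are hypothetical; a real engagement would derive them from
    the target system's schema and the agreed mapping specification."""
    findings, seen = [], set()
    for rec in records:
        rid = rec.get(dedup_key, "<no id>")
        for field in mandatory:
            if not rec.get(field):
                findings.append(f"{rid}: mandatory field '{field}' is empty")
        for field, limit in max_lengths.items():
            if len(rec.get(field) or "") > limit:
                findings.append(f"{rid}: '{field}' exceeds {limit} characters and would be truncated")
        if rid in seen:
            findings.append(f"{rid}: duplicate case identifier")
        seen.add(rid)
    return findings

legacy_cases = [
    {"case_id": "C-1", "report_type": "Spontaneous", "narrative": "x" * 250},
    {"case_id": "C-2", "report_type": None, "narrative": "short"},
    {"case_id": "C-1", "report_type": "Study", "narrative": "short"},
]
for finding in premigration_checks(legacy_cases, ["report_type"], {"narrative": 200}, "case_id"):
    print(finding)
# C-1: 'narrative' exceeds 200 characters and would be truncated
# C-2: mandatory field 'report_type' is empty
# C-1: duplicate case identifier
```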
As I said, in summary, this is a proven solution. BioPharm and Valiance have worked successfully for many clients now with the Accel-Migrate
Solution to bring them into play. I will pull your attention to the second and third quotes
where, if you go back to what we said about the question of acceptable error, with our acceptable
level of error being zero, I think these are a couple of examples from clients that really
have used this process and these tools and really found it has not only helped with efficient
migration but successful migration, realizing that under traditional approaches issues would
have gone through into their production environment. And when we talk about requirements, we are
happy to work with you and arrange to have discussions with customers as appropriate. I did mention before also that our software
and tools can be used across a wide array of systems. I point that out here because when
you are looking at migration solutions, there are also synergies to be had if you
have other requirements. There are synergies in being able to use the tools and the methodology
in other GxP environments should that be appropriate. That concludes my speaking portion of it.
Rodney, Eugene, I will hand it to you if you want to sum things up and then we can see
what looks like will be an interactive Q&A session. Sure. Thank you so much, Richard and Rodney.
I will go ahead and ask the questions that were presented to us in the webinar. Feel
free to ask your questions as well in the chat feature. Just type them in and we will try
to get to as many of them as possible, and I will ask Richard and Rodney to go ahead and answer
them as you feel necessary. So, the first question is “Does the migration
concern only safety data or also configurations?” Richard, do you want me to take that? Yes, Rodney, I think that’s probably … Sure. Yes, that is true. We have not performed migrations
where we migrated the actual configuration data. This is usually because, as we stated
in this webinar, the process re-engineering piece is kind of ongoing which means that
configurations may need to be updated or changed or at least addressed and reviewed prior or
in parallel rather to the data migration effort happening. So, today all of our migration
have included a manual configuration or re-configuration depending on the needs of the client. Therefore,
the automated configuration migration has thus far been out of scope, but I should mention
and, Richard, please correct me if I am wrong, the tool itself could absolutely be used to
migrate the configuration data should that become a requirement in the future. Technically speaking, yeah, that was correct.
It could be done, but, as you said, it’s not typically the recommended process. Okay, great. Thank you. I have the next question.
I think this is, I guess, for Rodney. “The solution that you are offering does not involve
Oracle at all? The reason I ask this is because previously we had Oracle come in and assist
with both validation and migration.” Yes, that is true. We are not proposing a
partnership with Oracle Corporation although we are Gold Partners with Oracle. So, we do
have access to information and resources, etc. with that partnership, but that is correct,
we do this migration effort completely independent of the Oracle group. Great. “How much time would you plan for the
entire migration, from the start of the project to the final migration and being
able to use Argus as a running system?” Yeah, without further information that’s obviously
a fairly difficult question to give a specific response to. I would say that when we are
dealing with modest case volumes, let’s say, 500 to 5000, you could expect anywhere from
3 to 6 months for the migration portion of it. Again, there is a caveat there in that
what are potential supplemental sources, what’s the nature of the configuration. There are
things that we don't know that could offer the opportunity to shorten that a little or create the necessity
to lengthen it a little. When you get to the larger scale migrations, it's not uncommon
for us to be involved in projects that are running for 6 to 12 months and, in fact, at
the moment we have projects that have been going for over a year when you have got
upwards of hundreds of thousands of cases and you are looking at multiple phases and
iterations of the migrations. So, I would highly suggest that you have to think about
the case volumes and the requirements that you might have and then contact either Valiance
or BioPharm; we can speak further and very quickly get you a more concrete idea of what
the actual time might look like in your case. Great. The next question is “Since Japan data
is very different from raw data, such as PMDA, DTD, for example, there seems to be conflict
during the migration. This is a broad question basically. I am asking if there are Japan
migration considerations going from another database to Argus J?” Actually, I will ask Mark Hughes to respond
to that. If you recall, I did introduce him earlier as one of our senior safety migration
consultants and he has quite a bit of experience working on the Japanese migration front. There actually are considerations moving from
that data source to Argus. Both BioPharm and Valiance have had extensive experience working
with Japanese sources like this. In fact, we have a project right now where we have
a team over in Japan working on this. So, we definitely have experience working with
this sort of information. It's definitely an apples-to-oranges sort of migration, but
it’s not something that we haven’t handled before. Great. Thank you. The next question is “We
are currently using Empiric 4.3 and have multiple instances and when we migrated previously,
we had to migrate the instances individually with running through a lot of different steps.
Are you saying that this solution is completely automated?” Yeah, depending on the multiple instances,
a lot of times if we have multiple sources that are completely different from each other,
then yeah, it does make sense to handle them separately, just because the data leads us to that.
So, in other words, if the data is completely separate and completely different, then they
might have different requirements and if there are different requirements from the different
sources, then, yes, that would lead to more or less different mini migrations at the same
time, but if the sources are of a similar nature, we could have a lot of reusability on whatever
configurations that we create for the separate sources. So, it really depends on the nature
of the separate sources. If they are close in nature, then sure, we can have a lot of
reusability there. If not, then they would require more fine tuning of the configurations
to do the different migrations. As far as fully automated, our tools are fully automated,
the process that we go through is fully automated. However, there is definitely a give and take
during the dry run. So, there will be some back and forth. During configuration time
there is going to be some give and take where the automation won't be taking place, but basically
at the end of the project, the configuration should be in a place that we can take advantage
of those fully automated capabilities to basically reduce run time and things like that. Yeah, I will add to that just a little bit,
something I perhaps omitted when I was speaking about the tools, that the software is automated and
obviously one of the big benefits is once you configure the migration and the testing,
when you have to do it again for the second dry run, it's already configured and you may
be making certain changes but you just have to push that button again. That’s the key
driver of the efficiency, but, as Mark said, that does not remove the necessity for user
involvement. As I said right at the beginning of the presentation, one of the big issues,
one of the things that clients wanted was a user-driven process. So, you cannot automate
the process to the extent that you negate that back and forth. During the dry run there
is going to be interaction with users as they make sure that not only has the data migrated
according to the specs, but they can use it and see it in the target system in the way
they need to. Great. Thank you. The next question is “I
do understand that you are doing a 100% check of the migrated data. Will there be enough
documentation evidence to pass an FDA inspection to ensure that all data from the old system
was migrated into Argus?” I will start that off and Mark can supplement
it if you would like. There are a couple of things to note here. First of all, on the
100% testing, the TRUCompare tool provides the output necessary, if you will, to verify
that the source to target transformations were implemented in accordance with the specification.
However, that in and of itself does not, I would say, constitute documentation that
is sufficient. So, the migration process includes the development of migration qualification
protocol and a summary report. Now, the migration qualification protocol is out of the box documentation
that comes with Valiance software and that defines or walks you through the actual validation
process and that will include scripts for testing the software to ensure that it’s qualified
and working as it should, that it’s configured as it should be and then executing the software
and testing the actual migration. Those things in combination provide you with the documentation
required and Valiance-BioPharm executed migrations have been audited by the FDA and there have
been no deficiencies found on that front. Mark, do you have anything to add to that
one? No, you have it. Great. Thank you. The next question is “For
the actual migration after the testing and validation is done, how long will the users
be offline during switching from one database to the other as during testing and validating
the old system will be in use and new data will be added?” Yeah, this is always one of the biggest challenges,
not least because the users need access to this data. So,
the short answer is that it depends on a lot of different factors. There are certain realities
that we have to sort of work around; it takes time to put records into Oracle. So, we have
a bunch of different strategies that can help that. We can have multi-processing, we could
throw more resources at the jobs when they are running. The reality is it depends on
how much data we are going to move. It depends on the resources that we have to extract and
insert the data. Typically, though I will tell you that we try to obviously minimize
it and typically we try to get it done over a weekend. That’s usually the most acceptable
thing for most clients. In some cases, for modest engagements we could do it over one
day, one night. It really depends on a lot of different factors, and I will tell you
that when we start looking at a job, figuring out what we have to do is
probably the most important thing, because there are a lot of
issues that you need to factor in for downtime. If we need to get more resources, we deal
with it; if we need to rethink our strategy, we may come up with an interesting
way to sort of pre-process the data so that things get done in a timely manner. So, this
is an important consideration I would think about very early on in the process. Great. Thank you. I am going to jump around
a little bit just because we are running short on time. So, if we haven’t answered your question,
we will try to get back to you after the webinar. Next question is “Does the migration process
also include customizing the Oracle Argus target system to match the legacy system customization?” Richard, I can take that. Sure. Thank you, Rodney. Yes, I think in general I am going to answer
this in a general way. So, regardless of what your source system is, the configuration or
customization that you have implemented in that source system needs to be accommodated
in some way in the target system. A while back we did a rather large migration effort of
a European pharmacovigilance solution called PV Works into Argus, and that system had very different
configurations than what was traditionally available in the Argus application, and of course
we had to work with the end user community to make sure that all of those configurations
found their way into the target Argus app. So, regardless of what your source or target
are, I think that the principle behind the question is an important one to make
sure that we address in the migration effort. Great. Thank you. I think we will ask one
more question and then we will close up. “Can you describe the ETL behind Accel-Migrate?” Yes, I think if we look at … you haven’t
found the right slide here … We did talk to the ETL tools. We don’t often refer to
it as an ETL tool, but TRUMigrate is the software that is used to actually connect the source
and target systems. I did speak to this process and the benefits of using the configuration
feature earlier. I am not sure if there is a specific question you are looking to answer, but with TRUMigrate,
we have pre-configurations for the tool that’s configured for the standard configurations
of the standard safety systems out there today. The tool will connect to the databases for
the source and target systems. It will read in the data schema and provide an interface
for users to configure… well, not the users. I will be clear, I am
not talking about business users; I should say migration users, the IT people involved
in configuring the migration… to configure the mapping rules and the transformations
through that interface. There is the ability to use scripts within the tool if
you wish to do so but, as I said, the vast majority of clients don't need to do that.
They can configure those rules, they can connect the supplemental systems and configure the
rules for merging data from two source systems to represent one unified record in the target.
TRUMigrate will then, when those configurations are in place, use an export utility to
export the data from the source, apply the transformation rules, and then import the data into the
target system.
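For readers who think in terms of code, here is a very rough, hedged sketch of the extract, transform and load pattern being described, using plain Python and the standard sqlite3 module purely as a stand-in; it is in no way the TRUMigrate implementation, and the table, column and rule names are invented.

```python
import sqlite3

# Invented mapping rules: target column -> (source column, transformation)
MAPPING_RULES = {
    "CASE_NUM":   ("legacy_case_id", str.strip),
    "EVENT_TERM": ("reaction_text",  str.title),
    "SERIOUS":    ("serious_flag",   lambda v: "Y" if str(v).upper() in ("1", "Y", "YES") else "N"),
}

def run_migration(source_db: str, target_db: str) -> int:
    """Extract rows from the source, apply the configured transformations,
    and load them into the target. Purely illustrative; assumes a
    legacy_cases table in the source and a case_master table in the target."""
    src, tgt = sqlite3.connect(source_db), sqlite3.connect(target_db)
    src.row_factory = sqlite3.Row
    source_cols = ", ".join(col for col, _ in MAPPING_RULES.values())
    target_cols = ", ".join(MAPPING_RULES)
    placeholders = ", ".join("?" for _ in MAPPING_RULES)
    migrated = 0
    for row in src.execute(f"SELECT {source_cols} FROM legacy_cases"):
        values = [fn(row[col]) for col, fn in MAPPING_RULES.values()]
        tgt.execute(f"INSERT INTO case_master ({target_cols}) VALUES ({placeholders})", values)
        migrated += 1
    tgt.commit()
    src.close()
    tgt.close()
    return migrated
```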
Again, if you have more specific questions, feel free to talk to me. I don't know if we will have time to get to that today, but we will
be making a note of that and do what we can to get back to people further. Again, if you
have specific requirements, feel free to reach out to us and we will be happy to discuss
them with you and talk to the tools and what we have got. Great. Thank you, Richard, and in the slide
in front of you are several ways that you can engage with BioPharm and Valiance. So,
go ahead and read through those. In the meantime, I just want to thank everybody for joining
this webinar. We will be sending out a link to the recording as well as a PDF version
of this presentation within 24 hours. You can always visit www.BioPharm.com and access
the webinar there at any time. We have several different webinars coming
up including ones on Empirica Signal and also two others on Argus migrations, one specifically
from AERS to Argus and another one from Empirica Trace to Argus. So, feel free to register
for those. You can find the links on our website. And last but not least, in front of you is
a slide that contains our contact information. You can reach out to us anytime and somebody
will get back to you as soon as possible. So, thank you so much for joining this presentation.
We hope that you found it helpful and we look forward to other webinars in the future. Thank you so much. Take care.
