PKanalix 2020R1 single page user guide

1.PKanalix documentation #

Version 2020

This documentation is for PKanalix, starting from the 2019 version.
©Lixoft
PKanalix performs analyses on PK data sets, including:

  • The non compartmental analysis (NCA), which computes NCA parameters using the calculation of the \(\lambda_z\), the slope of the terminal elimination phase.
  • The compartmental analysis (CA), which finds parameters of a model representing the PK as the dynamics in compartments for each individual. It does not include population analysis, which can be performed in Monolix.

What else?

  • A clear user interface with a simple workflow to efficiently run the NCA and CA analyses.
  • An easily accessible PK model library and an auto-initialization method to improve the convergence of the optimization of CA parameters.
  • Automatically generated results and plots to give immediate feedback.
  • Interconnection with the MonolixSuite applications, e.g., export of a project to Monolix for population analysis.

and, starting from the 2020 version:

  • Filters of a dataset to easily perform the analysis on several data subsets without modifying the original file.
  • A module to select and scale output units for better analysis and reporting.
  • Flexibility: selection of NCA parameters for computation and display, stratification of results by categorical covariates and acceptance criteria for comparison, multiple partial AUCs.
  • Plots with statistics of observed data.
  • More interface features: dark theme, font size, choice of significant digits.

PKanalix tasks

PKanalix uses the dataset format common to all MonolixSuite applications, see here for more details. This allows you to move your project between applications, for instance to export a CA project to Monolix and perform a population analysis with one click of a button. Based on a dataset, there are two tasks:

Non Compartmental Analysis

The first main feature of PKanalix is the calculation of the parameters in the Non Compartmental Analysis framework.

This task consists in defining rules for the calculation of the \(\lambda_z\) (slope of the terminal elimination phase) to compute all the NCA parameters. This definition can be either global, via general rules, or manual on each individual, where the user selects or removes points in the \(\lambda_z\) calculation.

Compartmental Analysis

The second main feature of PKanalix is the calculation of the parameters in the Compartmental Analysis framework. It consists in finding parameters of a model that represents the PK as the dynamics in compartments for each individual.

This task defines a structural model (based on a user-friendly PK model library) and estimates the parameters for all the individuals. An automatic initialization method improves the convergence of the parameter optimization for each individual.

All the NCA and/or CA outputs are automatically displayed in sortable tables in the Results tab. Moreover, they are exported to the result folder in an R-compatible format. Interactive plots give immediate feedback and help to better interpret the results.

PKanalix can be used not only via the user interface, but also via R with a dedicated R-package (detailed here). All the actions performed in the interface have their equivalent R-functions. This is particularly convenient for reproducibility purposes or batch jobs.
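As a minimal sketch of scripted use, assuming the lixoftConnectors R-package shipped with the MonolixSuite installation (the task-running function name below is an assumption and should be checked against the R-package documentation):

    library(lixoftConnectors)
    initializeLixoftConnectors(software = "pkanalix")  # connect to PKanalix
    loadProject("/path/to/project.pkx")                # open an existing project
    runNCAEstimation()                                 # assumed name of the NCA task runner
    # the results are then written as .txt files to the project result folder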





2.Data format for NCA and CA analysis #

The data set format used in PKanalix is the same as for the entire MonolixSuite, to allow smooth transitions between applications. The dosing information and the concentration measurements are recorded in a single data set. The dose information must be indicated for each individual, even if it is identical for all.

A data set typically contains the following columns: ID, TIME, OBSERVATION, AMOUNT. For IV infusion data, an additional INFUSION RATE or INFUSION DURATION column is necessary. For steady-state data, STEADY-STATE and INTERDOSE INTERVAL columns are added. Cells that do not contain information (e.g. the AMOUNT column on a line recording a measurement) have a dot. If a dose and a measurement occur at the same time, they can be encoded on the same line or on different lines. Sort and carry variables can be defined using the OCCASION, CATEGORICAL COVARIATE and CONTINUOUS COVARIATE column-types. BLQ data are defined using the CENSORING and LIMIT column-types.

Headers are free but the columns must be assigned to one of the available column-types. The full list of column-types is available at the end of this page and a detailed description is given on the dataset documentation website.

Units are not recorded in PKanalix and it is not possible to convert the original data to adapt the units. It is the user’s responsibility to ensure that units are consistent in the data set (for instance if the amount is in mg and the concentrations in mg/L, then the calculated volumes will be in L). Similarly, the indicated amount must be an absolute value. It is not possible to give a dose in mg/kg and let the software calculate the dose in mg based on the weight. This calculation must be done outside of PKanalix.

Note that the NCA application does not allow several observations at the same time for the same id in the dataset.

Below we show how to encode the data set for the most typical situations.

Plasma concentration data

Extravascular

For extravascular administration, the mandatory column-types are ID (individual identifiers as integers or strings), OBSERVATION (measured concentrations), AMOUNT (dose amount administered) and TIME (time of dose or measurement).

To distinguish the extravascular from the IV bolus case, in “Tasks>Run” the administration type must be set to “extravascular”.

If no measurement is recorded at the time of the dose, a concentration of zero is added at the dose time for single-dose data; for steady-state data, the minimum concentration observed during the dosing interval is added.

Example:

  • demo project_extravascular.pkx

This data set records the drug concentration measured after single oral administration of 150 mg of drug in 20 patients. For each individual, the first line records the dose (in the “Amount” column tagged as AMOUNT column-type) while the following lines record the measured concentrations (in the “Conc” column tagged as OBSERVATION). Cells of the “Amount” column on measurement lines contain a dot, and conversely for the “Conc” column on dose lines. The column containing the times of measurements or doses is tagged as TIME column-type and the subject identifiers, which we will use as sort variable, are tagged as ID. Check the OCCASION section if more sort variables are needed. After accepting the dataset, the data is automatically assigned as “Plasma”.
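For illustration, the first rows of such a data set could look as follows (hypothetical values; header names are free):

    ID  Time  Amount  Conc
    1   0     150     .
    1   0.5   .       5.6
    1   1     .       8.2
    1   2     .       7.3
    2   0     150     .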

In the “Tasks/Run” tab, the user must indicate that this is extravascular data. In the “Check lambda_z” tab, with a linear scale for the y-axis, measurements originally present in the data are shown with full circles. Added data points, such as a zero concentration at the dose time, are represented with empty circles. Points included in the \(\lambda_z\) calculation are highlighted in blue.

After running the NCA analysis, PK parameters relevant to extravascular administration are displayed in the “Results” tab.

 

IV infusion

Intravenous infusions are indicated in the data set via the presence of an INFUSION RATE or INFUSION DURATION column-type, in addition to the ID (individual identifiers as integers or strings), OBSERVATION (measured concentrations), AMOUNT (dose amount administered) and TIME (time of dose or measurement). The infusion duration (or rate) can be identical or different between individuals.

In “Tasks>Run” the administration type must be set to “intravenous”.

If no measurement is recorded at the time of the dose, a concentration of zero is added at the dose time for single-dose data; for steady-state data, the minimum concentration observed during the dosing interval is added.

Example:

  • demo project_ivinfusion.pkx:

In this example, the patients receive an iv infusion over 3 hours. The infusion duration is recorded in the column called “TINF” in this example, and tagged as INFUSION DURATION.

In the “Tasks/Run” tab, the user must indicate that this is intravenous data.

 

IV bolus

For IV bolus administration, the mandatory column-types are ID (individual identifiers as integers or strings), OBSERVATION (measured concentrations), AMOUNT (dose amount administered) and TIME (time of dose or measurement).

To distinguish the IV bolus from the extravascular case, in “Tasks>Run” the administration type must be set to “intravenous”.

If no measurement is recorded at the time of the dose, the concentration at time zero is extrapolated using a log-linear regression of the first two data points, or is taken to be the first observed measurement if the regression yields a slope >= 0. See the calculation details for more information.

Example:

  • demo project_ivbolus.pkx:

In this data set, 25 individuals have received an iv bolus and their plasma concentrations have been recorded over 12 hours. For each individual (indicated in the column “Subj” tagged as ID column-type), we record the dose amount in a column “Dose”, tagged as AMOUNT column-type. The measured concentrations are tagged as OBSERVATION and the times as TIME. Check the OCCASION section if more sort variables are needed in addition to ID. After accepting the dataset, the data is automatically assigned as “plasma”.

In the “Tasks/Run” tab, the user must indicate that this is intravenous data. In the “Check lambda_z”, measurements originally present in the data are shown with full circles. Added data points, such as the C0 at the dose time, are represented with empty circles. Points included in the \(\lambda_z\) calculation are highlighted in blue.

After running the NCA analysis, PK parameters relevant to iv bolus administration are displayed in the “Results” tab.

 

Steady-state

Steady-state must be indicated in the data set by using the STEADY STATE column-type:

  • “1” indicates that the individual is already at steady-state when receiving the dose. This implicitly assumes that the individual has received many doses before this one.
  • “0” or “.” indicates a single dose.

The dosing interval (also called tau) is indicated in the INTERDOSE INTERVAL column, on the lines defining the doses.

Steady-state calculation formulas will be applied for individuals having a dose with STEADY STATE = 1. A data set can contain both individuals that are at steady-state and individuals that are not.

If no measurement is recorded at the time of the dose, the minimum concentration observed during the dose interval is added at the time of the dose for extravascular and infusion data. For iv bolus, a regression using the first two data points is performed. Only measurements between the dose time and the dose time + interdose interval will be used.

Examples:

  • demo project_steadystate.pkx:

In this example, the individuals are already at steady-state when they receive the dose. This is indicated in the data set via the column “SteadyState”, tagged as STEADY STATE column-type, which contains a “1” on lines recording doses. The interdose interval is noted on those same lines in the column “tau”, tagged as INTERDOSE INTERVAL. When accepting the data set, a “Settings” section appears, which allows the user to define the number of steady-state doses. This information is relevant when exporting to Monolix, but is not used in PKanalix directly.

After running the NCA estimation task, steady-state specific parameters are displayed in the “Results” tab.

 

BLQ data

Below the limit of quantification (BLQ) data can be recorded in the data set using the CENSORING column:

  • “0” indicates that the value in the OBSERVATION column is the measurement.
  • “1” indicates that the observation is BLQ.

The lower limit of quantification (LOQ) must be indicated in the OBSERVATION column when CENSORING = “1”. Note that strings are not allowed in the OBSERVATION column (except dots). A different LOQ value can be used for each BLQ data.

When performing an NCA analysis, the BLQ data before and after the Tmax are distinguished. They can be replaced by:

  • zero
  • the LOQ value
  • the LOQ value divided by 2
  • or considered as missing

For a CA analysis, the same options are available, but no distinction is done between before and after Tmax. Once replaced, the BLQ data are handled as any other observation.

A LIMIT column can be added to record the other limit of the interval (in general zero). This value will not be used by PKanalix but can facilitate the transition from an NCA/CA analysis in PKanalix to a population model in Monolix.

Note: the proper encoding of BLQ data can easily be done using Excel or R.
With R, the “CENSORING” column can be added to an existing “data” data frame using data$CENS <- ifelse(data$CONC=="BQL", 1, 0), for the case where the observation column is called CONC and BLQ data are originally recorded in this column with the string “BQL”. Then replace the “BQL” string by the LOQ value (here 0.2 for instance): data$CONC <- ifelse(data$CONC=="BQL", 0.2, data$CONC).
With Excel, type for instance =IF(E2="BQL",1,0) (assuming the column containing the observations is E) to generate the first value of the “CENSORING” column and then propagate to the entire column. Finally, replace the “BQL” string by the LOQ value.
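Putting the R steps together, a minimal sketch (file names are illustrative; the LOQ is 0.2 as above):

    data <- read.csv("data_original.csv", stringsAsFactors = FALSE)
    data$CENS <- ifelse(data$CONC == "BQL", 1, 0)            # new CENSORING column
    data$CONC <- ifelse(data$CONC == "BQL", 0.2, data$CONC)  # replace the tag by the LOQ
    write.csv(data, "data_censored.csv", row.names = FALSE, quote = FALSE)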

Examples:

  • demo project_censoring.pkx: two studies with BLQ data with two different LOQ

In this dataset, the measurements of two different studies (indicated in the STUDY column, tagged as CATEGORICAL COVARIATE in order to be carried over) are recorded. For the study 101, the LOQ is 1.8 ug/mL, while it is 1 ug/mL for study 102. The BLQ data are marked with a “1” in the BLQ column, which is tagged as CENSORING. The LOQ values are indicated for each BLQ in the CONC column of measurements, tagged as OBSERVATION.

In the “Task>NCA>Run” tab, the user can choose how to handle the BLQ data. BLQ data before and after Tmax can be considered as missing (as if the data set row did not exist), or replaced by zero (default before Tmax), the LOQ value, or the LOQ value divided by 2 (default after Tmax). In the “Check lambda_z” tab, the BLQ data are shown in green and displayed according to the replacement value.

For the CA analysis, the replacement value for all BLQ can be chosen in the settings of the “Run” tab (default is Missing). In the “Check init.” tab, the BLQ are again displayed in green, at the LOQ value (irrespective of the chosen method for the calculations).

Urine data

To work with urine data, it is necessary to record the time and amount administered, the volume of urine collected for each time interval, the start and end time of the intervals and the drug concentration in a urine sample of each interval. The time intervals must be continuous (no gaps allowed).

In PKanalix, the start and end times of the intervals are recorded in a single column, tagged as TIME column-type. In this way, the end time of an interval automatically acts as the start time for the next interval. The concentrations are recorded in the OBSERVATION column. The volume column must be tagged as REGRESSOR column-type. This general column-type of MonolixSuite data sets allows an easy transition to the other applications of the Suite. As several REGRESSOR columns are allowed, the user can select which REGRESSOR column should be used as the volume. The concentration and volume measured for the interval [t1,t2] are noted on the t2 line. The volume value on the dose line is meaningless, but it cannot be a dot. We thus recommend setting it to zero.

A typical urine data set has the following structure. A dose of 150 ng has been administered at time 0. The first sampling interval spans from the dose at time 0 to 4h post-dose. During this time, 410 mL of urine have been collected. In this sample, the drug concentration is 112 ng/mL. The second interval spans from 4h to 8h, the collected urine volume is 280 mL and its concentration is 92 ng/mL. The third interval is marked on the figure: 390 mL of urine have been collected from 8h to 12h.

The given data is used to calculate the interval midpoints and the excretion rate for each interval. This information is then used to calculate the \(\lambda_z\) and the urine-specific parameters. In “Tasks/Check lambda_z”, we display the midpoints and excretion rates. In the “Plots>Data viewer”, however, we display the measured concentrations at the end time of each interval.

Example:

  • demo project_urine.pkx: urine PK dataset

In this urine PK data set, we record the consecutive time intervals in the “TIME” column tagged as TIME. The collected volumes and measured concentrations are in the columns “VOL” and “CONC”, respectively tagged as REGRESSOR and OBSERVATION. Note that the volume and concentration are indicated on the line of the interval end time. The volume on the first line (start time of the first interval, as well as dose line) is set to zero as it must be a double. This value will not be used in the calculations. Once the dataset is accepted, the observation type must be set to “urine” and the REGRESSOR column corresponding to the volume must be defined.

In “Tasks>Check lambda_z”, the excretion rates are plotted at the midpoint times for each individual. The choice of the lambda_z works as usual.

Once the NCA task has run, urine-specific PK parameters are displayed in the “Results” tab.

 

Occasions (“Sort” variables)

The main sort level is the individual, indicated in the ID column. Additional sort levels can be encoded using one or several OCCASION column(s). OCCASION columns contain integer values that distinguish different time periods for a given individual. The time values can restart at zero or continue when switching from one occasion to the next. The variables differing between periods, such as the treatment for a crossover study, are tagged as CATEGORICAL or CONTINUOUS COVARIATES (see below). The NCA and CA calculations will be performed on each ID-OCCASION combination. Each occasion is considered independent of the others (i.e. a washout is applied between occasions).

Note: occasion columns encoding the sort variables as integers can easily be added to an existing data set using Excel or R.
With R, the “OCC” column can be added to an existing “data” data frame with a column “TREAT” using data$OCC <- ifelse(data$TREAT=="ref", 1, 2).
With Excel, assuming the sort variable is encoded in the column E with values “ref” and “test”, type =IF(E2="ref",1,2) to generate the first value of the “OCC” column and then propagate to the entire column:
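If the sort variable has more than two categories, a generic one-liner in R (a sketch; “TREAT” is the example column name) avoids nested ifelse calls:

    data$OCC <- as.integer(factor(data$TREAT, levels = unique(data$TREAT)))
    # categories are numbered 1, 2, 3, ... in order of first appearance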

Examples:

  • demo project_occasions1.pkx: crossover study with two treatments

The subject column is tagged as ID, the treatment column as CATEGORICAL COVARIATE and an additional column encoding the two periods with integers “1” and “2” as OCCASION column.

In the “Check lambda_z” (for the NCA) and the “Check init.” (for CA), each occasion of each individual is displayed. The syntax “1#2” indicates individual 1, occasion 2, according to the values defined in the ID and OCCASION columns.

In the “Individual estimates” output tables, the first columns indicate the ID and OCCASION (reusing the data set headers). The covariates are indicated at the end of the table. Note that it is possible to sort the table by any column, including ID, OCCASION and COVARIATES.

The OCCASION values are available in the plots for stratification, in addition to possible CATEGORICAL or CONTINUOUS COVARIATES (here “TREAT”).

  • demo project_occasions2.pkx: study with two treatments and with/without food

In this example, we have three sorting variables: ID, TREAT and FOOD. The TREAT and FOOD columns are duplicated: once with strings to be used as CATEGORICAL COVARIATE (TREAT and FOOD) and once with integers to be used as OCCASION (occT and occF).

In the individual parameters tables and plots, three levels are visible. In the “Individual parameters vs covariates”, we can plot Cmax versus FOOD, split by TREAT for instance (Cmax versus TREAT split by FOOD is also possible).

Covariates (“Carry” variables)

Individual information that needs to be carried over to output tables and plots must be tagged as CATEGORICAL or CONTINUOUS COVARIATE. Categorical covariates define variables with a few categories, such as treatment or sex, and are encoded as strings. Continuous covariates define variables on a continuous scale, such as weight or age, and are encoded as numbers. Covariates will not automatically be used as “Sort” variables; a dedicated OCCASION column is necessary (see above).

Covariates will automatically appear in the output tables. Plots of estimated NCA and/or CA parameters versus covariate values will also be generated. In addition, covariates can be used to stratify (split, color or filter) any plot. Statistics about the covariates distributions are available in table format in “Results > Cov. stat.” and in graphical format in “Plots > Covariate viewer”.

Note: It is preferable to avoid spaces and special characters (stars, etc) in the strings for the categories of the categorical covariates. Underscores are allowed.

Example:

  • demo project_covariates.pkx

In this data set, “SEX” is tagged as CATEGORICAL COVARIATE and “WEIGHT” as CONTINUOUS COVARIATE.

The “cov stat” table shows a few statistics of the covariate values in the data set. In the plot “Covariate viewer”, we see that the distribution of weight is similar for males and females.

After running the NCA and CA tasks, both covariates appear in the table of estimated individual parameters.

In the plot “parameters versus covariates”, the parameter values are plotted as scatter plots with the parameter value (here Cmax and AUCINF_pred) on the y-axis and the weight on the x-axis, and as boxplots for females and males.

All plots can be stratified using the covariates. For instance, the “Data viewer” can be colored by weight after having created 3 weight groups. Below we also show the plot “Distribution of the parameters” split by sex with selection of the AUCINF_pred parameter.

 

Other useful column-types

In addition to the typical cases presented above, we briefly present a few additional column-types that may be convenient.

  • ignoring a line: with the IGNORED OBSERVATION column-type (which ignores the measurement only) or IGNORED LINE (which ignores the entire line, including regressor and dose information)
  • working with several types of observations: different types of observations can be recorded in the same dataset (for instance parent and metabolite concentrations). They are distinguished using the OBSERVATION ID column-type. One of the observation ids is then selected in the “Tasks>Run” section to perform the calculations.
  • working with several types of administrations: a single data set can contain different types of administrations, for instance IV bolus and extravascular, distinguished using the ADMINISTRATION ID column-type. The setting “administration type” in “Tasks>Run” can be chosen separately for each administration id, and the appropriate parameter calculations will be performed.

 

Overview of all column-types

Column-types used for all types of lines:

Column-types used for response-lines:

Column-types used for dose-lines:

 

 

Excel macro to adapt data format for PKanalix

We provide an Excel macro that can be used to adapt the formatting of your dataset for use in PKanalix. The macro takes as input the original dataset, in Excel or text format, and the dosing information separately and produces an adapted dataset with the doses in a new column. The macro can also translate information on missing observations, censored observations, urine volume, and occasions into the PKanalix-compatible format.

Macro download and documentation

2.1.Excel macro to adapt data format for PKanalix #

We provide an Excel macro that can be used to adapt the formatting of your dataset for use in PKanalix:

Download the Excel macro to adapt data formatting for PKanalix (version May 11th 2020)

The macro takes as input the original dataset, in Excel or text format, and the dosing information separately, and produces an adapted dataset with the doses in a new column. The macro can also translate information on missing observations, censored observations, urine volume, and occasions into the PKanalix-compatible format. The adapted dataset is saved as an Excel file and, if selected, as a CSV file for direct use in PKanalix.

More details of the actions performed by the macro are described below.

 

Steps to follow to adapt the data format:

  • Click on the button “Adapt data for PKanalix” and choose the dataset file to open in Excel
  • Fill the form to give information on the dataset:
    — In case of several sheets, select the name of the sheet to adapt
    — Select the columns containing ID, TIME and OBSERVATION information
    — In case of occasions, check the corresponding box and select the column containing the OCCASION information
    — In case of urine data, change the corresponding radio button and select the columns containing each collection interval START TIME and END TIME, and the column containing VOLUME information.
    — Give treatment information: single/multiple dose, time of first dose, dose amount (individual dose amounts can be read from a column of the dataset), number of doses, inter-dose interval, and infusion duration in case of IV infusion.
    — In case of censored observations, indicate the tag representing censored observations in the dataset if it is different from the default tags (“BLQ”, “BLOQ”, “CENS”), and give the value of the LLOQ.
  • Click on “Adapt data” and wait for the result.

Result:

  • The adapted dataset is saved in a new .xlsx file with a name derived from the original name or user-specified.
  • If requested in the form, the adapted dataset is also saved in a new .CSV file. If the initial file has several sheets, the sheet name is added to the CSV file name.
  • If the file already exists, a new sheet is added to the file, unless a sheet with the same name already exists.
  • All changes from the original dataset are highlighted with colors:
    — blue = dosing time or amount,
    — gray = missing observation,
    — orange = censored observations,
    — green = urine volume at dosing time,
    — yellow = new occasions as integers.

Actions performed by the macro:

  • A column AMT is added, containing the same dosing regimen for all subjects or subject-occasions (with an option to read individual dose amounts from a column of the dataset). The column is colored in blue, as well as dosing times. Note that the dosing times are correct in the column tagged as TIME in the form, but other time columns will not be consistent.
  • In case of urine data, the volume is set to 0 at dosing times (colored in green) to avoid format errors, and both start and end times of the interval are set to the dosing time. The macro checks that time intervals are continuous (no gaps allowed). Only the column END TIME should be used as TIME in PKanalix.
  • In case of censored observations, a column CENS is added and colored in orange, containing 0 for non-censored observations and 1 for censored observations. The LLOQ is written instead of the censoring tag in the column of observations and colored as well.
  • Missing observations (all values that are not numbers) are replaced by “.” and colored in gray.
  • In case of occasions, if the occasion column does not contain only numbers, a new column OCC is added with occasion numbers and colored in yellow.
  • The first lines that do not contain data are concatenated into a single header row.
  • Rows that do not contain data are deleted.
  • Formatting is adapted for correct exporting to CSV format.

 

Examples

  • Single doses: the original data on the left contains concentration data. All individuals have received single doses at time 0, with individual dose amounts in the column Dose. The macro is used to add the dosing times. In the form, the columns Subject, Time, Conc are tagged as ID, TIME and OBSERVATION respectively. The column Dose is indicated in the form as containing the dose amounts, and the single dosing time is also specified in the form. In addition to adding the dosing times and the corresponding amounts in a new column AMT, the macro replaces the missing observations (any value in Conc that is not a number) by “.”, compatible with PKanalix. The macro form can also be used to specify that <BLQ> is a flag for censored observations rather than missing ones, with an LLOQ of 1. In that case the macro adds a new column CENS and replaces <BLQ> by the LLOQ. Finally, the second line containing units is removed from the adapted dataset, because it cannot be read by PKanalix.


  • Multiple doses: the original data on the left contains only ids, measurement times and concentrations. The macro is used to add doses. In the macro form, the columns Id, Time and Y are tagged as ID, TIME and OBSERVATION. A multiple dose regimen is specified with 5 doses starting at time -48 with interdose interval of 12 and level of 150. Running the macro with these settings adds a column AMT with 5 doses per individual.


  • Occasions: the original data on the left contains a time-varying covariate “Treatment”, which should be defined as different occasions for PKanalix. Running the macro with “Treatment” tagged as “Occasion” creates a new column OCC in the adapted dataset, with different integer occasions corresponding to the different values of “Treatment”. Moreover, the dose level is read for each subject-occasion from the column “Dose”. Here a single dosing time at 0 has been specified in the macro form. Finally, the second line containing units is removed from the adapted dataset, because it cannot be read by PKanalix.


  • Censored observations: on the example below (original data on the left, adapted data on the right), the column Period already contains occasions as integers, corresponding to the different values of FORM. Censored observations are flagged with “BLQ” in the column DV. The macro is used with the following settings: ID, time, DV and Period are tagged respectively as ID, TIME, OBSERVATION and OCCASION. A single dose is specified with value 600 at time 0, and the specified LLOQ value is 0.06. Running the macro adds two new columns AMT and CENS containing the doses and the censored observations flags. “BLQ” is replaced by 0.06 in the column DV. The column Period is left untouched since it is already compatible with PKanalix.


  • Urine data: the original data on the left contains urine concentration measurements in the column CONC, with the urine volume in VOL collected during the interval defined by START TIME and END TIME. Using the macro to add a single dose of 150 at time 0 to all individuals produces the data on the right. Since VOL is tagged as urine volume in the macro, the value 0 for VOL is added at dosing times, and the macro checks that the intervals are continuous. The missing concentration for the interval between times 24 and 48 for id 2 is replaced by a “.”, compatible with PKanalix.


 

2.2.Units: display and scaling #

Starting with the 2020 version, PKanalix has a new feature: UNITS.

  • It displays units of the NCA and CA parameters, calculated automatically from the information provided by the user about the units of the measurements in the dataset. Units are shown in the results tables and on plots, and are added to the saved files to give physical meaning to the data.
  • It allows scaling the values of a dataset to present the outputs in preferred units, which facilitates the interpretation of the results. Scaling is done in the PKanalix data manager and does not change the original data file.

PKanalix does not recognize units automatically. To define units correctly, the user needs to know the units of the input data in the loaded dataset.

Units definition

Units of the NCA and CA parameters are combinations of the units of time, concentration, amount and, in the case of urine-type data, volume. For instance, the unit of AUC is [concentration unit × time unit]. These quantities correspond to columns in a typical PK dataset: time of measurements, observed concentration, dose amount and volume of collected urine.

The “Units” block has the basic quantities, corresponding to the “input data”, on the left (green frame below). The units listed on the right in drop-down menus (violet frame) are the output units, that is, the units displayed in results and plots after running the NCA and CA tasks.

Concentration is a composite quantity defined as “amount per volume”. It has two separate units, which are linked (and equal) to AMT and VOLUME respectively. Changing one automatically updates the other.

Units without data conversion: Output units correspond to units of the input data.

In other words, the desired units of the NCA and CA parameters correspond to the units of the measurements in the dataset. In this case, select the units of the input data from the drop-down menus and keep the default scaling factor (equal to one). All calculations are performed using the values in the original dataset, and the selected units are displayed in the results tables and on plots.

Units conversion: output units are different from the units of the input data.

NCA and CA parameters can be calculated and displayed in any units, not necessarily the same as those used in the dataset. The scaling factors (by default equal to 1) multiply the corresponding columns of the dataset and transform them into a dataset in the new units. PKanalix shows the data with the new, scaled values. This conversion occurs only internally, and the original dataset remains unchanged. So, in this case, select the desired output units from the list and, knowing the units of the input data, scale the original values to the new units. Reminder: the user needs to know the units used in the dataset to select the scaling factor correctly.

For instance, let measurement times in a dataset be in hours. To obtain the outputs in days, set the output time unit as “days” and the scaling factor to 1/24, as shown below. It reads as follows:

(values of time in hours from the dataset) * (1/24) = time in days.

After accepting the scaling, the dataset shown in the Data tab is converted internally to a new data set. It contains the original values multiplied by the scaling factors. All computations in the NCA and CA tasks are then performed using the new values, so that the results correspond to the selected units.
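Conceptually, this internal conversion is a simple column-wise multiplication. In R notation (PKanalix does this for you; the column names and factors are examples), the hours-to-days case above, together with a dose conversion from mg to ug, would read:

    data$TIME <- data$TIME * (1/24)  # time in hours -> time in days
    data$AMT  <- data$AMT  * 1000    # amount in mg  -> amount in ug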

Units for the NCA parameters

Rsq no unit
Rsq_adjusted no unit
Corr_XY no unit
No_points_lambda_z no unit
Lambda_z time^-1
Lambda_z_lower time
Lambda_z_upper time
HL_lambda_z time
Span no unit
Lambda_z_intercept no unit
T0 time
Tlag time
Tmax_Rate time
Max_Rate amount.time^-1
Mid_Pt_last time
Rate_last amount.time^-1
Rate_last_pred amount.time^-1
AURC_last amount
AURC_last_D no unit
Vol_UR volume
Amount_recovered amount
Percent_recovered %
AURC_all amount
AURC_INF_obs amount
AURC_PerCentExtrap_obs %
AURC_INF_pred amount
AURC_PerCentExtrap_pred %
C0 amount.volume^-1
Tmin time
Cmin amount.volume^-1
Tmax time
Cmax amount.volume^-1
Cmax_D volume^-1
Tlast time
Clast amount.volume^-1
AUClast time.amount.volume^-1
AUClast_D time.volume^-1
AUMClast time^2.amount.volume^-1
AUCall time.amount.volume^-1
AUCINF_obs time.amount.volume^-1
AUCINF_D_obs time.volume^-1
AUCINF_pred time.amount.volume^-1
AUCINF_D_pred time.volume^-1
AUC_PerCentExtrap_obs %
AUC_PerCentBack_Ext_obs %
AUMCINF_obs time^2.amount.volume^-1
AUMC_PerCentExtrap_obs %
Vz_F_obs volume
Cl_F_obs volume.time^-1
Cl_obs volume.time^-1
Cl_pred volume.time^-1
Vss_obs volume
Clast_pred amount.volume^-1
AUC_PerCentExtrap_pred %
AUC_PerCentBack_Ext_pred %
AUMCINF_pred time^2.amount.volume^-1
AUMC_PerCentExtrap_pred %
Vz_F_pred volume
Cl_F_pred volume.time^-1
Vss_pred volume
Tau time
Ctau amount.volume^-1
Ctrough amount.volume^-1
AUC_TAU time.amount.volume^-1
AUC_TAU_D time.volume^-1
AUC_TAU_PerCentExtrap %
AUMC_TAU time^2.amount.volume^-1
Vz volume
Vz_obs volume
Vz_pred volume
Vz_F volume
CLss_F volume.time^-1
CLss volume.time^-1
CAvg amount.volume^-1
FluctuationPerCent %
FluctuationPerCent_Tau %
Accumulation_index no unit
Swing no unit
Swing_Tau no unit
Dose amount
N_Samples no unit
MRTlast time
MRTINF_obs time
MRTINF_pred time

Units for the PK parameters

Units are available only in PKanalix. When exporting a project to Monolix, the values of the PK parameters are re-converted to the original dataset units. Below, volumeFactor is defined implicitly as volumeFactor = amtFactor/concFactor, where each “factor” is the scaling factor used in the “Units” block of the Data tab.

PARAMETER UNIT BACK-CONVERSION TO DATASET UNITS
V (V1, V2, V3) volume value/volumeFactor
k (ka, k12, k21, k31, k13, Ktr) 1/time value*timeFactor
Cl, Q (Q2, Q3) volume/time value*timeFactor/volumeFactor
Vm amount/time value*timeFactor/amountFactor
Km concentration value/concFactor
T (Tk0, Tlag, Mtt) time value/timeFactor
alpha, beta, gamma 1/time value*timeFactor
A, B, C 1/volume value*volumeFactor

 

Units display

To visualize units, switch on the “units toggle” and accept the dataset. Then, after running the NCA and CA tasks, units are displayed:

  • In the results table next to the parameter name (violet frame), in the table copied with the “copy” button (blue frame) and in the saved .txt files in the result folder (green frame).

  • On plots, if the “units” display is switched on in the General plots settings.

2.3.Filtering a data set #

Starting with the 2020 version, it is possible to filter your data set to take only a subpart into account in your analysis. This allows filtering on specific IDs, times, measurement values, etc. It is also possible to define complementary filters, as well as filters of filters. This feature is accessible through the “Filters” item in the Data tab.

Creation of a filter

To create a filter, click on the data set name. You can then create a “child”. It corresponds to a subpart of the data set in which you define your filtering actions.

You can see at the top (in the green rectangle) the action that you are about to complete, and you can CANCEL, ACCEPT, or ACCEPT & APPLY with the buttons at the bottom.

Filtering actions

In all the filtering actions, you need to define:

  • An action: one of the following possibilities: select ids, remove ids, select lines, remove lines.
  • A header: the column of the data set you wish to act on. Notice that it must be a column of the data set that was tagged with a header.
  • An operator: the operator of choice (=, ≠, <, ≤, >, or ≥).
  • A value. When the header contains numerical values, the user can define it freely. When the header contains strings, a list is proposed.

For example, you can

  • Remove the ID 1 from your study:

    In that case, all the IDs except ID = 1 will be used for the study.
  • Select all the lines where the time is less than or equal to 24:

    In that case, all lines with a time strictly greater than 24 will be removed. If a subject has no measurements left, it will be removed from the study.
  • Select all the ids where SEX equals F:

    In that case, all the males will be removed from the study.
  • Remove all the ids where WEIGHT is less than or equal to 65:

    In that case, only the subjects with a weight over 65 will be kept for the study.

In any case, the filtered data set, as interpreted, will be displayed in the Data tab.

Filters with several actions

In the previous examples, we defined only one action. It is also possible to combine several actions to define a filter, as a UNION and/or an INTERSECTION of actions.

INTERSECTION

By clicking on the + and – buttons on the right, you can define an intersection of actions. For example, by clicking on the +, you can define a filter corresponding to the intersection of:

  • The IDs that are different from 1.
  • The lines with the time values less than 24.

Thus, in that case, all the lines with a time less than 24 and corresponding to an ID different from 1 will be used in the study. If we look at the following data set as an example:


(Figure: the initial data set; the data set after the action “select IDs ≠ 1”; the data set after the action “select lines with time ≤ 24”; and, as the intersection of the two actions, the data set considered for the study.)
UNION

By clicking on the + and – buttons at the bottom, you can define a union of actions. For example, in a data set with multiple doses, one can focus on the first and the last dose. Thus, by clicking on the +, you can define a filter corresponding to the union of:

  • The lines where the time is strictly less than 12.
  • The lines where the time is greater than 72.


(Figure: the initial data set; the data sets after the actions “select lines where the time is strictly less than 12”, “select lines where the time is greater than 72” and “select lines where amt equals 40”; and, as the union of the three actions, the data set considered for the study.)

Notice that, if we define only the first two actions, all the dose lines at a time in ]12, 72[ will also be removed. Thus, to keep all the doses, we need to add the condition of selecting the lines where the dose is defined (here, the lines where amt equals 40).

In addition, it is possible to do any combination of INTERSECTION and UNION.

Other filters: filter of filter and complementary filters

Based on the definition of a filter, it is possible to define two other actions. By clicking on the filter, it is possible to create

  • A child: a new filter with the initial filter as its source data set.
  • A complement: the complement of the filter. For example, if you defined a filter with only the IDs where SEX is F, then the complement corresponds to the IDs where SEX is not F.

3.Non Compartmental Analysis #

One of the main features of PKanalix is the calculation of the parameters in the Non Compartmental Analysis framework.

 

NCA task

There is a dedicated task in the “Tasks” frame as in the following figure.
This task contains two different parts.

  • The first one, called “Run”, corresponds to the calculation button and the settings for the calculation and the acceptance criteria. The meaning of all the settings and their defaults are defined here.
  • The second one allows the user to define the global settings impacting the calculation of \(\lambda_z\) (explained here) and the individual choice of measurements for the \(\lambda_z\) calculation (explained here), along with the visualization of the regression line for each subject.

 

NCA results

Once the NCA task has been run, the results are available in the “Results” frame. Two tables are proposed.

Non compartmental analysis results per individual

Individual estimates of the NCA parameters are displayed in the table of the “INDIV. ESTIM.” tab, as in the following figure.

All the computed parameters depend on the type of observation and administration. All the parameters are described here. Notice that on all tables, there is an icon at the top right to copy the table into a Word or Excel file.

 

Statistics on non compartmental analysis results

A summary table is also proposed in the “SUMMARY” tab, as in the following figure.

All the summary calculations are described here. Notice that on all tables, there is an icon at the top right to copy the table into a Word or Excel file.

 

NCA plots

In the “Plots” frame, numerous plots associated with the individual parameters are displayed.

  • Correlation between NCA parameters: this plot displays scatter plots for each pair of parameters. It allows the user to identify correlations between parameters and to check the coherence of the parameters for each individual.
  • Distribution of the NCA parameters: this plot shows the empirical distribution of the parameters and thus gives an idea of their distribution over the individuals.
  • NCA parameters w.r.t. covariates: this plot displays the individual parameters as a function of the covariates. It allows the user to identify correlations between the individual parameters and the covariates.


NCA outputs

After running the NCA task, the following files are available in the result folder:

  • ncaSummary.txt contains the summary of the NCA parameter calculation, in a format easily readable by a human (but not easy to parse for a computer).
  • ncaIndividualParametersSummary.txt contains the summary of the NCA parameters in a computer-friendly format.
    • The first column corresponds to the name of the parameters
    • The second column corresponds to the CDISC name of the parameters
    • The other columns correspond to the several elements describing the summary of the parameters (as explained here)
  • ncaIndividualParameters.txt contains the NCA parameters for each subject-occasion along with the covariates.
    • The first line corresponds to the name of the parameters
    • The second line corresponds to the CDISC name of the parameters
    • The other lines correspond to the value of the parameters

The files ncaIndividualParametersSummary.txt and ncaIndividualParameters.txt can be loaded into R, for example using the following command:

 read.table("/path/to/file.txt", sep = ",", header = T)

Remarks

  • To load the individual parameters with the PKanalix names as headers, you just need to remove the CDISC name line after reading:
     ncaParameters = read.table("/path/to/file.txt", sep = ",",  header = T);
     ncaParameters[-1,] # to remove the CDISC name line
  • To load the individual parameters with the CDISC names as headers, you just need to skip the first line:
     ncaParameters = read.table("/path/to/file.txt", sep = ",",  header = T, skip = 1)
  • The separator is the one defined in the user preferences. We set “,” in this example as it is the default.
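Note also that with the first approach the CDISC name line is read as data, so every column comes back as character. A small sketch to restore numeric columns afterwards:

    ncaParameters <- read.table("/path/to/file.txt", sep = ",", header = T, stringsAsFactors = FALSE)
    ncaParameters <- ncaParameters[-1, ]  # drop the CDISC name line
    # convert back to numbers where possible, keeping true strings unchanged:
    ncaParameters[] <- lapply(ncaParameters, type.convert, as.is = TRUE)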

3.1.Check lambda_z #

The \(\lambda_z\) is the slope of the terminal elimination phase. If \(\lambda_z\) can be estimated, NCA parameters will be extrapolated to infinity. A dedicated tab allows to visualize and control the points included in the calculation of \(\lambda_z\).

General rule

In the “Check lambda_z” tab, the data of each individual is plotted, with a log-scale y-axis by default. A general rule applying to all individuals can be chosen on the right (green highlight) to select which points to include in the \(\lambda_z\) calculation:

  • “R2” chooses the number of points that optimize the correlation coefficient.
  • “Adjusted R2” (called “Best Fit” in WinNonlin) optimizes an adjusted correlation coefficient, taking into account the number of points. The “maximum number of points” and “minimum time” settings allow the user to further restrict the points to select.
  • “Interval” permits to select all points within a given time interval.
  • “Points” selects the n last points.

See the Settings page and Calculations page for more details. In addition to the general rule, it is possible to more precisely define which points to include for each individual (see below).

The calculation of the \(\lambda_z\) is done via a linear regression, which can be weighted. The weighting can also be chosen on the right (blue highlight).

The points included in the \(\lambda_z\) calculation are visualized in the following way:

  • points used for the \(\lambda_z\) calculation in blue.
  • points not used for the \(\lambda_z\) calculation in grey.
  • the \(\lambda_z\) curve in green.

Manual choice of measurements for the \(\lambda_z\) calculation

It is also possible to choose the measurements used for the \(\lambda_z\) calculation for each individual. This can be done by defining a range or selecting specific points to include or exclude.

Changing the range

For a given individual, a range can be selected using the grey arrow at the bottom. Points that will be added are colored in light blue and the points that will be removed are colored in dark grey as in the following figure.

To take into account the modification for the calculation of the \(\lambda_z\), you need to click on the button on the top right of the interface as in the following figure. Several individuals can be modified before clicking on “Apply include/exclude”.

Then the points and the \(\lambda_z\) curve will be updated as in the following figure. Notice that a “return arrow” appears at the top left of the figure. It resets this individual to the general rule. The button “Reset to general rule” in the “Apply include/exclude” menu allows resetting all individuals to the general rule.

Including/excluding single points

Points can be added to or removed from the list of points used to calculate \(\lambda_z\):

  • If you click on an unused (grey) measurement, it will change its color to light blue and be selected for inclusion.
  • If you click on a used (blue) measurement, it will change its color to dark grey and be selected for exclusion.

These changes are applied when clicking on the “Apply include/exclude” button. Individuals can be reset to the general rule one by one by using the return button at the top left, or all together using the “Reset to general rule” button in the “Apply include/exclude” menu.

Remarks

  • If the general rule is modified, only the individuals for which the general rule applies will be updated. Individuals for which the points have been manually modified will not be updated.
  • If the \(\lambda_z\) cannot be computed, the background of the plot will be red, as in the following figure.

3.2.Data processing and calculation rules #

This page presents the rules applied to pre-process the data set and the calculation rules applied for the NCA analysis.

 

Data processing

Ignored data

All observation points occurring before the last dose recorded for each individual are excluded. Observation points occurring at the same time as the last dose are kept, irrespective of their position in the data set file.

Note that for plasma data, negative or zero concentrations are not excluded.

Forbidden situations

For plasma data, mandatory columns are ID, TIME, OBSERVATION, and AMOUNT. For urine data, mandatory columns are ID, TIME, OBSERVATION, AMOUNT and one REGRESSOR (to define the volume).

Two observations at the same time point will generate an error.

For urine data, negative or null volumes and negative observations generate an error.

Additional points

For plasma data, if an individual has no observation at dose time, a value is added:

  • Extravascular and Infusion data: For single dose data, a concentration of zero. For steady-state, the minimum value observed during the dosing interval.
  • IV Bolus data: the concentration at dose time (C0) is extrapolated using a log-linear regression (i.e. log(concentration) versus time) with uniform weight of the first two data points. In the following cases, C0 is taken to be the first observed measurement instead (which can be zero or negative):
    • one of the two observations is zero
    • the regression yields a slope >= 0

BLQ data

Measurements marked as BLQ data with a “1” in the CENSORING column will be replaced by zero, the LOQ value or the LOQ value divided by 2, or considered as missing (i.e excluded) depending on the setting chosen. They are then handled as any other measurement. The LOQ value is indicated in the OBSERVATION column of the data set.

Steady-state

Steady-state is indicated using the STEADY-STATE and INTERDOSE INTERVAL column-types. Equal dosing intervals are assumed. Observation points occurring after the dose time + interdose interval are excluded for Cmin and Cmax, but not for lambda_z. Dedicated parameters are computed, such as the AUC over the interdose interval, and specific formulas are used for, for example, the clearance and the volume. More details can be found here.

Urine

Urine data is assumed to be single-dose, irrespective of the presence of a STEADY-STATE column. For the NCA analysis, the data is not used directly. Instead, the interval midpoints and the excretion rate for each interval (amount eliminated per unit of time) are calculated and used:

\( \textrm{midpoint} = \frac{\textrm{start time } + \textrm{ end time}}{2} \)

\( \textrm{excretion rate} = \frac{\textrm{concentration } \times \textrm{ volume}}{\textrm{end time } - \textrm{ start time}} \)
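As a small R sketch of this pre-processing, using the interval structure of the urine example above (the concentration of the third interval is a made-up value):

    t    <- c(0, 4, 8, 12)       # dose time, then interval end times (h)
    vol  <- c(0, 410, 280, 390)  # collected volumes (mL); zero on the dose line
    conc <- c(NA, 112, 92, 75)   # concentrations (ng/mL), on the end-time lines
    start <- head(t, -1); end <- tail(t, -1)
    midpoint <- (start + end) / 2                           # 2, 6, 10
    excretion_rate <- (conc[-1] * vol[-1]) / (end - start)  # amount per hour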

 

Calculation rules

Lambda_z

PKanalix tries to estimate the slope of the terminal elimination phase, called \( \lambda_z \), as well as the intercept, called Lambda_z_intercept. \( \lambda_z \) is calculated via a linear regression between Y = log(concentrations) and X = time. Several weightings are available for the regression: uniform, \( 1/Y\) and \(1/Y^2\).

Zero and negative concentrations are excluded from the regression (but not from the NCA parameter calculations). The number of points included in the linear regression can be chosen via the “Main rule” setting. In addition, the user can define specific points to include or exclude for each individual (see Check lambda_z page for details). When one of the automatic “main rules” is used, points prior to Cmax, and the point at Cmax for non-bolus models are never included. Those points can however be included manually by the user. If \( \lambda_z \) can be estimated, NCA parameters will be extrapolated to infinity.

R2 rule: the regression is first done with the three last points, then the four last points, then the five last points, etc. If the R2 for n points is larger than or equal to the R2 for (n-1) points - 0.0001, then the n-point regression is kept. Additional constraints on the measurements included in the \( \lambda_z \) calculation can be set using the “maximum number of points” and “minimum time” settings. If strictly fewer than 3 points are available for the regression or if the calculated slope is positive, the \( \lambda_z \) calculation fails.

Adjusted R2 rule: the regression is done with the three last points, then four last points, then five last points, etc. For each regression the adjusted R2 is calculated as:

\( \textrm{Adjusted R2} = 1 - \frac{(1-R^2)\times (n-1)}{(n-2)} \)

with \(n\) the number of data points included and \(R^2\) the square of the correlation coefficient.
If the adjusted R2 for n points is larger than or equal to the adjusted R2 for (n-1) points - 0.0001, then the n-point regression is kept. Additional constraints on the measurements included in the \( \lambda_z \) calculation can be set using the “maximum number of points” and “minimum time” settings. If strictly fewer than 3 points are available for the regression or if the calculated slope is positive, the \( \lambda_z \) calculation fails.
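A minimal R sketch of this selection rule for one individual (assuming strictly positive concentrations sorted by time, points before/at Cmax already excluded, and uniform weighting; the R2 rule works the same way with the plain R2):

    adjusted_r2_rule <- function(time, logconc, max_points = length(time)) {
      n_total <- length(time)
      if (n_total < 3) return(NA)      # the lambda_z calculation fails
      adj_r2 <- function(n) {          # adjusted R2 of the n last points
        idx <- (n_total - n + 1):n_total
        r2  <- summary(lm(logconc[idx] ~ time[idx]))$r.squared
        1 - (1 - r2) * (n - 1) / (n - 2)
      }
      best_n <- 3
      n_max  <- min(n_total, max_points)
      if (n_max >= 4)
        for (n in 4:n_max)
          if (adj_r2(n) >= adj_r2(n - 1) - 1e-4) best_n <- n
      best_n                           # number of last points to use
    }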

Interval: strictly positive concentrations within the given time interval are used to calculate \( \lambda_z \). Points on the interval bounds are included. Semi-open intervals can be defined using +/- infinity.

Points: the n last points are used to calculate \( \lambda_z \). Negative and zero concentrations are excluded after the selection of the n last points. As a consequence, some individuals may have less than n points used.

 

AUC calculation

The following linear and logarithmic rules apply to calculate the AUC and AUMC over an interval [t1, t2] where the measured concentrations are C1 and C2. The total AUC is the sum of the AUCs calculated on each interval. If the logarithmic rule fails on an interval because C1 or C2 is zero or negative, the linear rule is applied for that interval.

Linear formula: 

\( AUC |_{t_1}^{t_2} = (t_2-t_1) \times \frac{C_1+C_2}{2}  \)

\( AUMC |_{t_1}^{t_2} = (t_2-t_1) \times \frac{t_1 \times C_1+ t_2 \times C_2}{2}  \)

Logarithmic formula:

\( AUC |_{t_1}^{t_2} = (t_2-t_1) \times \frac{C_2 - C_1}{\ln(\frac{C_2}{C_1})}  \)

\( AUMC |_{t_1}^{t_2} = (t_2-t_1) \times \frac{t_2 \times C_2 - t_1 \times C_1}{\ln(\frac{C_2}{C_1})}  - (t_2-t_1)^2 \times \frac{C_2 - C_1}{\ln(\frac{C_2}{C_1})^2}  \)
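A per-interval sketch of these formulas in R, with the documented fallback to the linear rule (when C1 = C2 the two rules coincide, so falling back also avoids the division by ln(1) = 0):

    auc_segment <- function(t1, t2, c1, c2, method = "log") {
      if (method == "log" && c1 > 0 && c2 > 0 && c1 != c2) {
        (t2 - t1) * (c2 - c1) / log(c2 / c1)  # logarithmic rule
      } else {
        (t2 - t1) * (c1 + c2) / 2             # linear rule (and fallback)
      }
    }
    # total AUC: sum over consecutive measurement pairs, e.g.
    # sum(mapply(auc_segment, head(t, -1), tail(t, -1), head(C, -1), tail(C, -1)))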

 

Interpolation formula for partial AUC

When a partial AUC is requested at time points not included in the original data set, it is necessary to add an additional measurement point. These additional time points can be before or after the last observed data point.

Note that the partial AUC is not computed if a bound of the interval falls before the dosing time.

 

Additional point before last observed data point

Depending on the choice of the “Integral method” setting, this can be done using a linear or log formula to find the added concentration C* at the requested time t*, given that the previous and following measurements are C1 at t1 and C2 at t2.

Linear interpolation formula:     

\( C^* = C_1 + \left| \frac{t^*-t_1}{t_2-t_1} \right| \times (C_2-C_1) \)

Logarithmic interpolation formula:     

\( C^* = \exp \left( \ln(C_1) + \left| \frac{t^*-t_1}{t_2-t_1} \right| \times (\ln(C_2)-\ln(C_1))  \right) \)

If the logarithmic interpolation rule fails in an interval because C1 or C2 is null or negative, then the linear interpolation rule applies for that interval.
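A minimal R transcription of these two interpolation formulas, including the linear fallback, could read:

# Sketch: interpolated concentration C* at time tstar, with t1 <= tstar <= t2.
interpConc <- function(tstar, t1, t2, C1, C2, method = c("linear", "log")) {
  method <- match.arg(method)
  if (method == "log" && (C1 <= 0 || C2 <= 0)) method <- "linear"  # fallback rule
  w <- abs((tstar - t1) / (t2 - t1))
  if (method == "linear") C1 + w * (C2 - C1)
  else exp(log(C1) + w * (log(C2) - log(C1)))
}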

 

Additional point after last observed data point

If \( \lambda_z \) is not estimable, the partial area will not be calculated. Otherwise, \( \lambda_z \) is used to calculate the additional concentration C*:

\( C^* = \exp(\textrm{Lambda\_z\_intercept} - \lambda_z \times t^*) \)
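In R, this extrapolation is a one-liner using the intercept and slope returned by the \( \lambda_z \) regression:

# Sketch: extrapolated concentration at a time tstar after the last observation.
extrapConc <- function(tstar, lambda_z, lambda_z_intercept) {
  exp(lambda_z_intercept - lambda_z * tstar)
}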

 

3.3.NCA parameters #

The following page describes all the parameters computed by the non compartmental analysis. Parameter names are fixed and cannot be changed.

Parameters related to \(\lambda_z\)

Name PKPARMCD CDISC PKPARM CDISC UNITS DESCRIPTION
Rsq R2 R Squared no unit Goodness of fit statistic for the terminal (log-linear) phase between the linear regression and the data
Rsq_adjusted R2ADJ R Squared Adjusted no unit Goodness of fit statistic for the terminal elimination phase, adjusted for the number of points used in the estimation of \(\lambda_z\)
Corr_XY CORRXY Correlation Between TimeX and Log ConcY no unit Correlation between time (X) and log concentration (Y) for the points used in the estimation of \(\lambda_z\)
No_points_lambda_z LAMZNPT Number of Points for Lambda z no unit Number of points considered in the \(\lambda_z\) regression
Lambda_z LAMZ Lambda z 1/time First order rate constant associated with the terminal (log-linear) portion of the curve. Estimated by linear regression of time vs. log concentration
Lambda_z_lower LAMZLL Lambda z Lower Limit time Lower limit on time for values to be included in the \(\lambda_z\) calculation
Lambda_z_upper LAMZUL Lambda z Upper Limit time Upper limit on time for values to be included in the \(\lambda_z\) calculation
HL_Lambda_z LAMZHL Half-Life Lambda z time Terminal half-life
= ln(2)/Lambda_z
Lambda_z_intercept no unit Intercept found during the regression for \(\lambda_z\), i.e. the value of the regression (in log-scale) at time 0. The regression writes:
log(Concentration) = -Lambda_z*t+Lambda_z_intercept
Span no unit Ratio between the sampling interval of the measurements used for the \(\lambda_z\) calculation and the terminal half-life
= (Lambda_z_upper - Lambda_z_lower)*Lambda_z/ln(2)
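As a small worked example of the half-life and span formulas above, with hypothetical values:

# Hypothetical values, for illustration of the formulas above.
lambda_z <- 0.1                              # 1/h
lower <- 2; upper <- 24                      # Lambda_z_lower and Lambda_z_upper (h)
HL_Lambda_z <- log(2) / lambda_z             # terminal half-life: ~6.93 h
Span <- (upper - lower) * lambda_z / log(2)  # ~3.17, i.e. more than 3 half-lives sampled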

Parameters related to plasma/blood measurements

Name PKPARMCD CDISC PKPARM CDISC UNITS DESCRIPTION
Tlag TLAG Time Until First Nonzero Conc time Time prior to the first measurable (non-zero) concentration, i.e. the time between the dose and the last time point after the dose where OBS=0 or OBS=LOQ. Tlag is 0 if the first observation after the last dose is not 0 or LOQ. The value is set to 0 for non-extravascular input
T0 time Time of the dose
Dose amount Amount of the dose
N_Samples no unit Number of samples for the individual.
C0 C0 Initial Conc amount/volume If a PK profile does not contain an observation at dose time (C0), the following value is added
Extravascular and Infusion data. For single dose data, a concentration of zero.  For steady-state, the minimum observed during the dose interval.
IV Bolus data. Log-linear regression of first two data points to back-extrapolate C0.
Tmin TMIN Time of CMIN Observation time Time of minimum concentration sampled during a dosing interval.
Cmin CMIN Min Conc amount/volume Minimum observed concentration, occurring at Tmin. If not unique, then the first minimum is used.
Tmax TMAX Time of CMAX time Time of maximum observed concentration.
– For non-steady-state data, the entire curve is considered.
– For steady-state data, Tmax corresponds to points collected during a dosing interval.
If the maximum observed concentration is not unique, then the first maximum is used.
Cmax CMAX Max Conc amount/volume Maximum observed concentration, occurring at Tmax. If not unique, then the first maximum is used.
Cmax_D CMAXD Max Conc Norm by Dose 1/volume Maximum observed concentration divided by dose.
Cmax_D = Cmax/Dose
Tlast TLST Time of Last Nonzero Conc time Last time point with measurable concentration
Clast CLST Last Nonzero Conc amount/volume Concentration of last time point with measurable concentration
AUClast AUCLST AUC to Last Nonzero Conc time.amount/volume Area under the curve from the time of dosing to the last measurable positive concentration. The calculation depends on the Integral method setting.
AUClast_D AUCLSTD AUC to Last Nonzero Conc Norm by Dose time/volume Area under the curve from the time of dosing to the last measurable concentration divided by the dose.
AUClast_D = AUClast/Dose
AUMClast AUMCLST AUMC to Last Nonzero Conc time2.amount/volume Area under the moment curve from the time of dosing to the last measurable concentration.
MRTlast MRTIVLST MRT Intravasc to Last Nonzero Conc time [if intravascular] Mean residence time from the time of dosing to the time of the last measurable concentration, for a substance administered by intravascular dosing.
MRTlast_iv = AUMClast/AUClast - TI/2, where TI represents the infusion duration.
MRTlast MRTEVLST MRT Extravasc to Last Nonzero Conc time [if extravascular] Mean residence time from the time of dosing to the time of the last measurable concentration for a substance administered by extravascular dosing.
MRTlast_ev = AUMClast/AUClast - TI/2, where TI represents the infusion duration.
AUCall AUCALL AUC All time.amount/volume Area under the curve from the time of dosing to the time of the last observation.
If the last concentration is positive AUClast=AUCall.
Otherwise, AUCall will not be equal to AUClast as it includes the additional area from the last measurable concentration down to zero or negative observations.
AUCINF_obs AUCIFO AUC Infinity Obs time.amount/volume AUC from Dosing_time extrapolated to infinity, based on the last observed concentration.
AUCINF_obs = AUClast + Clast/Lambda_z
AUCINF_D_obs AUCIFOD AUC Infinity Obs Norm by Dose time/volume AUCINF_obs divided by dose
AUCINF_D_obs = AUCINF_obs/Dose
AUC_%Extrap_obs AUCPEO AUC %Extrapolation Obs % Percentage of AUCINF_obs due to extrapolation from Tlast to infinity.
AUC_%Extrap_obs = 100*(1- AUClast / AUCINF_obs)
AUC_%Back_Ext_obs AUCPBEO AUC %Back Extrapolation Obs % Applies only for intravascular bolus dosing. The percentage of AUCINF_obs that was due to back extrapolation to estimate C(0).
AUMCINF_obs AUMCIFO AUMC Infinity Obs time2.amount/volume Area under the first moment curve to infinity using observed Clast
AUMCINF_obs = AUMClast + (Clast/Lambda_z)*(Tlast + 1.0/Lambda_z)
AUMC_%Extrap_obs AUMCPEO AUMC % Extrapolation Obs % Extrapolated (% or total) area under the first moment curve to infinity using observed Clast
AUMC_%Extrap_obs = 100*(1- AUMClast / AUMCINF_obs)
MRTINF_obs MRTIVIFO MRT Intravasc Infinity Obs time [if intravascular] Mean Residence Time extrapolated to infinity for a substance administered by intravascular dosing using observed Clast
MRTINF_obs_iv = AUMCINF_obs/AUCINF_obs - TI/2, where TI represents the infusion duration.
MRTINF_obs MRTEVIFO MRT Extravasc Infinity Obs time [if extravascular] Mean Residence Time extrapolated to infinity for a substance administered by extravascular dosing using observed Clast
MRTINF_obs_ev = AUMCINF_obs/AUCINF_obs - TI/2, where TI represents the infusion duration.
Vz_F_obs VZFO Vz Obs by F volume [if extravascular] Volume of distribution associated with the terminal phase divided by F (bioavailability)
Vz_F_obs = Dose/Lambda_z/AUCINF_obs
Cl_F_obs CLFO Total CL Obs by F volume/time [if extravascular] Clearance over F (based on observed Clast)
Cl_F_obs = Dose/AUCINF_obs
Vz_obs VZO Vz Obs volume [if intravascular] Volume of distribution associated with the terminal phase
Vz_obs= Dose/Lambda_z/AUCINF_obs
Cl_obs CLO Total CL Obs volume/time  [if intravascular] Clearance (based on observed Clast)
Cl_obs = Dose/AUCINF_obs
Vss_obs VSSO Vol Dist Steady State Obs volume [if intravascular] An estimate of the volume of distribution at steady state, based on the last observed concentration.
Vss_obs = MRTINF_obs*Cl_obs
Clast_pred amount/volume Predicted concentration at Tlast, based on the \(\lambda_z\) regression:
Clast_pred = exp(Lambda_z_intercept - Lambda_z*Tlast)
where Lambda_z_intercept (the y-intercept obtained when calculating \(\lambda_z\)) and Lambda_z are the values found during the regression for \(\lambda_z\)
AUCINF_pred AUCIFP AUC Infinity Pred time.amount/volume Area under the curve from the dose time extrapolated to infinity, based on the last predicted concentration, i.e., concentration at the final observation time estimated using the linear regression performed to estimate \(\lambda_z\) .
AUCINF_pred = AUClast + Clast_pred/Lambda_z
AUCINF_D_pred AUCIFPD AUC Infinity Pred Norm by Dose time/volume AUCINF_pred divided by dose
= AUCINF_pred/Dose
AUC_%Extrap_pred AUCPEP AUC %Extrapolation Pred % Percentage of AUCINF_pred due to extrapolation from Tlast to infinity
AUC_%Extrap_pred = 100*(1- AUClast / AUCINF_pred)
AUC_%Back_Ext_pred AUCPBEP AUC %Back Extrapolation Pred % Applies only for intravascular bolus dosing. The percentage of AUCINF_pred that was due to back extrapolation to estimate C(0).
AUMCINF_pred AUMCIFP AUMC Infinity Pred time2.amount/volume Area under the first moment curve to infinity using predicted Clast
AUMCINF_pred = AUMClast + (Clast_pred/Lambda_z)*(Tlast+1/Lambda_z)
AUMC_%Extrap_pred AUMCPEP AUMC % Extrapolation Pred % Extrapolated (% or total) area under the first moment curve to infinity using predicted Clast
AUMC_%Extrap_pred = 100*(1- AUMClast / AUMCINF_pred)
MRTINF_pred MRTIVIFP MRT Intravasc Infinity Pred time [if intravascular] Mean Residence Time extrapolated to infinity for a substance administered by intravascular dosing using predicted Clast
MRTINF_pred MRTEVIFP MRT Extravasc Infinity Pred time [if extravascular] Mean Residence Time extrapolated to infinity for a substance administered by extravascular dosing using predicted Clast
Vz_F_pred VZFP Vz Pred by F volume [if extravascular] Volume of distribution associated with the terminal phase divided by F (bioavailability)
= Dose/Lambda_z/AUCINF_pred
Cl_F_pred CLFP Total CL Pred by F volume/time [if extravascular] Clearance over F (using predicted Clast)
Cl_F_pred = Dose/AUCINF_pred
Vz_pred VZP Vz Pred volume [if intravascular] Volume of distribution associated with the terminal phase
Vz_pred = Dose/Lambda_z/AUCINF_pred
Cl_pred CLP Total CL Pred volume/time [if intravascular] Clearance (using predicted Clast)
= Dose/AUCINF_pred
Vss_pred VSSP Vol Dist Steady State Pred volume [if intravascular] An estimate of the volume of distribution at steady state based on the last predicted concentration.
Vss_pred = MRTINF_pred*Cl_pred
AUC_lower_upper AUCINT AUC from T1 to T2 time.amount/volume AUC from T1 to T2 (partial AUC)
AUC_lower_upper_D AUCINTD AUC from T1 to T2 Norm by Dose time/volume AUC from T1 to T2 (partial AUC) divided by Dose
CAVG_lower_upper CAVGINT Average Conc from T1 to T2 amount/volume Average concentration from T1 to T2

 

Parameters related to plasma/blood measurements specific to steady state dosing regimen

In the case of repeated doses, dedicated parameters define the steady state, and specific formulas apply, for example for the clearance and the volume. Note that the calculations dedicated to the area under the first moment curve (AUMC) are not relevant in this case. A small worked example follows the table.

Name PKPARMCD CDISC PKPARM CDISC UNITS DESCRIPTION
Tau time The (assumed equal) dosing interval for steady-state data.
Ctau CTROUGH Conc Trough amount/volume Concentration at end of dosing interval.
If the observed concentration does not exist, the value is interpolated or extrapolated.
AUC_TAU AUCTAU AUC Over Dosing Interval time.amount/volume The area under the curve (AUC) for the defined interval between doses (TAU). The calculation depends on the Integral method setting.
AUC_TAU_D AUCTAUD AUC Over Dosing Interval Norm by Dose time/volume The area under the curve (AUC) for the defined interval between doses (TAU) divided by the dose.
AUC_TAU_D = AUC_TAU/Dose
AUC_TAU_%Extrap % Percentage of AUC_TAU due to extrapolation in steady state.
AUC_TAU_%Extrap = 100*AUC([Tlast, Tau])/AUC_TAU (computed when Tlast <= Tau)
AUMC_TAU AUMCTAU AUMC Over Dosing Interval time2.amount/volume The area under the first moment curve (AUMC) for the defined interval between doses (TAU).
Vz_F VZFTAU Vz for Dose Int by F volume [if extravascular] The volume of distribution associated with the terminal slope following extravascular administration divided by the fraction of dose absorbed, calculated using AUC_TAU.
Vz_F =  Dose/Lambda_z/AUC_TAU 
Vz VZTAU Vz for Dose Int volume [if intravascular] The volume of distribution associated with the terminal slope following intravascular administration, calculated using AUC_TAU.
Vz =  Dose/Lambda_z/AUC_TAU 
CLss_F CLFTAU Total CL by F for Dose Int volume/time [if extravascular] The total body clearance for extravascular administration divided by the fraction of dose absorbed, calculated using AUC_TAU .
CLss_F =  Dose/AUC_TAU 
CLss CLTAU Total CL for Dose Int volume/time [if intravascular] The total body clearance for intravascular administration, calculated using AUC_TAU.
CLss =  Dose/AUC_TAU 
Cavg CAVG Average Concentration amount/volume AUCTAU divided by Tau.
Cavg = AUC_TAU /Tau
Fluctuation% FLUCP Fluctuation% % The difference between Cmin and Cmax standardized to Cavg, between dose time and Tau.
Fluctuation% = 100.0*(Cmax - Cmin)/Cavg
Fluctuation%_Tau % The difference between Ctau and Cmax standardized to Cavg, between dose time and Tau.
Fluctuation%_Tau = 100.0*(Cmax - Ctau)/Cavg
Accumulation_Index AILAMZ Accumulation Index using Lambda z no unit Theoretical accumulation ratio: Predicted accumulation ratio for area under the curve (AUC) calculated using the Lambda z estimated from single dose data.
Accumulation_Index = 1.0/(1.0 -exp(-Lambda_z*Tau))
Swing no unit The degree of fluctuation over one dosing interval at steady state
Swing = (Cmax -Cmin)/Cmin
Swing_Tau no unit Swing_Tau = (Cmax -Ctau)/Ctau
Tmin TMIN Time of CMIN observation. time Time of minimum concentration sampled during a dosing interval.
Cmin CMIN Min Conc amount/volume Minimum observed concentration between dose time and dose time + Tau.
Cmax CMAX Max Conc amount/volume Maximum observed concentration between dose time and dose time + Tau.
MRTINF_obs MRTIVIFO or MRTEVIFO MRT Intravasc Infinity Obs or MRT Extravasc Infinity Obs time Mean Residence Time extrapolated to infinity using observed Clast, calculated using AUC_TAU.
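The worked example below applies several of these formulas to hypothetical values (all numbers are assumptions chosen for illustration):

# Worked example of the steady-state formulas, with hypothetical values.
tau <- 12; dose <- 100                       # dosing interval (h), dose (mg)
auc_tau <- 480                               # AUC_TAU (mg.h/L, assumed)
cmax <- 60; cmin <- 20; lambda_z <- 0.1      # assumed concentrations (mg/L) and slope (1/h)
Cavg     <- auc_tau / tau                    # 40 mg/L
CLss     <- dose / auc_tau                   # ~0.208 L/h
Fluct    <- 100 * (cmax - cmin) / Cavg       # Fluctuation%: 100 %
Swing    <- (cmax - cmin) / cmin             # 2
AccIndex <- 1 / (1 - exp(-lambda_z * tau))   # Accumulation_Index: ~1.43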

 

Parameters related to urine measurements

Name PKPARMCD CDISC PKPARM CDISC UNITS DESCRIPTION
T0 time Time of the last administered dose (assumed to be zero unless
otherwise specified).
Dose amount Amount of the dose
N_Samples no unit Number of samples for the individual.
Tlag TLAG Time Until First Nonzero Conc time Midpoint prior to the first measurable (non-zero) rate.
Tmax_Rate ERTMAX Midpoint of Interval of Maximum ER time Midpoint of collection interval associated with the maximum observed excretion rate.
Max_Rate ERMAX Max Excretion Rate amount/time Maximum observed excretion rate.
Mid_Pt_last ERTLST Midpoint of Interval of Last Nonzero ER time Midpoint of collection interval associated with Rate_last.
Rate_last ERLST Last Meas Excretion Rate amount/time Last measurable (positive) rate.
AURC_last AURCLST AURC to Last Nonzero Rate amount Area under the urinary excretion rate curve from time 0 to the last
measurable rate.
AURC_last_D AURCLSTD AURC to Last Nonzero Rate Norm by Dose no unit The area under the urinary excretion rate curve (AURC) from time zero to the last measurable rate, divided by the dose.
Vol_UR VOLPK Sum of Urine Vol volume Sum of Urine Volumes
Amount_Recovered amount Cumulative amount eliminated.
Percent_Recovered % 100*Amount_Recovered/Dose
AURC_all AURCALL AURC All amount Area under the urinary excretion rate curve from time 0 to the last rate. This equals AURC_last if the last rate is measurable.
AURC_INF_obs AURCIFO AURC Infinity Obs amount Area under the urinary excretion rate curve extrapolated to infinity, based on the last observed excretion rate.
AURC_%Extrap_obs AURCPEO AURC % Extrapolation Obs % Percent of AURC_INF_obs that is extrapolated
AURC_INF_pred AURCIFP AURC Infinity Pred amount Area under the urinary excretion rate curve extrapolated to infinity, based on the last predicted excretion rate.
AURC_%Extrap_pred AURCPEP AURC % Extrapolation Pred % Percent of AURC_INF_pred that is extrapolated
AURC_lower_upper AURCINT AURC from T1 to T2 (partial AUC) amount The area under the urinary excretion rate curve (AURC) over the interval from T1 to T2.
Rate_last_pred amount/time Predicted excretion rate at the midpoint of the last measurable collection interval, based on the \(\lambda_z\) regression: the intercept and Lambda_z are the values found during the regression for \(\lambda_z\)
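As a small worked example of the recovery parameters, with hypothetical values:

# Worked example of the urine recovery parameters (hypothetical values).
dose <- 100                                # mg
amounts <- c(20, 15, 8, 3)                 # amount eliminated per collection interval (mg)
Amount_Recovered  <- sum(amounts)          # 46 mg
Percent_Recovered <- 100 * Amount_Recovered / dose   # 46 %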

3.4.NCA settings #

The following page describes all the settings for the parameter calculations.

Calculations

These settings impact the calculation of the NCA parameters.

  • Administration type: Intravenous or Extravascular. This defines the type of drug administration. IV bolus and IV infusions must be set as “intravenous”.
  • Integral method: Method for the AUC and AUMC calculation and interpolation. Trapezoidal refers to the formula used to calculate the AUC, and Interpolation refers to the formula used to add an additional point in case of a partial AUC with a time point not originally present in the data set. See the Calculation rules for details on the formulas.
    • Linear Trapezoidal Linear (Linear trapezoidal, linear interpolation): linear formula for both the AUC calculation and the interpolation.
    • Linear Log Trapezoidal (Linear/log trapezoidal, linear/log interpolation): for the AUC, linear before Cmax and log after Cmax; same for the interpolation.
    • Linear Up Log Down (LinUpLogDown trapezoidal, LinUpLogDown interpolation): for both the AUC and the interpolation, linear if the concentration is going up or is stable, log if it is going down.
    • Linear Trapezoidal linear/log (Linear trapezoidal, linear/log interpolation): linear formula for the AUC; for the interpolation, linear before Cmax and log after Cmax. If there are several Cmax, the first one is used.

    For all methods, if an observation value is less than or equal to zero, the program defaults to the linear trapezoidal or interpolation rule for that point. Similarly, if adjacent observation values are equal to each other, the program defaults to the linear trapezoidal or interpolation rule.

  • Partial AUC time: Defines whether a partial AUC is computed on a specific time interval. Only one partial AUC can be defined; it applies to all individuals.
  • BLQ before Tmax: Method by which BLQ measurements before Tmax are replaced. Possible methods are “missing”, “LOQ”, “LOQ/2” or “0” (default value is 0).
  • BLQ after Tmax: Method by which BLQ measurements after Tmax are replaced. Possible methods are “missing”, “LOQ”, “LOQ/2” or “0” (default value is LOQ/2).




Settings impacting the \(\lambda_z\) calculation

These settings impact the calculation of \(\lambda_z\). See also the Check lambda_z page.

  • Main rule for the \(\lambda_z\) estimation. It corresponds to the rule defining the measurements used for the \(\lambda_z\) calculation. Possible rules are “R2”, “interval”, “points” or “adjustedR2” (called Best Fit in WinNonlin); the default value is “adjustedR2”. See the calculation rules for more details, and the R snippet after this list for how to switch rules from R.
    • In case of the “R2” or “adjustedR2” rule, the user has the possibility to define the maximum number of points and/or minimum time for the \(\lambda_z\) estimation. This constrains the points tested for inclusion.
    • In case of “Interval” rule, the user has the possibility to define the time interval to consider.
    • In case of “Points” rule, the user has the possibility to define the number of points to consider.
  • Weighting method used for the regression that estimates \(\lambda_z\). Possible methods are “Y”, “Y2” or “uniform” (default value is “uniform”).
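These settings can also be changed from R, as sketched below. The argument name lambdaRule is the one used in the R example later in this guide; check the setNCASettings documentation for the complete list of arguments and values.

# Assumes a PKanalix project is loaded via the lixoftConnectors package.
setNCASettings(lambdaRule = "adjustedR2")   # default main rule (Best Fit)
setNCASettings(lambdaRule = "points")       # use the n last points
setNCASettings(lambdaRule = "interval")     # use a time interval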

Selecting NCA parameters to compute





 

Acceptance criteria

These settings define the acceptance criteria. When acceptance criteria are defined, flags indicating whether each condition is met are added to the output result table. Statistics in the summary table can be calculated using only individuals who satisfy one or several acceptance criteria. The filtering option is available directly in the Summary sub-tab of the NCA results. A sketch showing how the flags can be recomputed in R is given after the list.

  • Adjusted R2: It corresponds to the threshold of the adjusted R2 for the estimation of \(\lambda_z\). If activated, it will fill the value of Flag_Rsq_adjusted: if Rsq_adjusted > threshold, then Flag_Rsq_adjusted=1. The default value is 0.98.
  • % extrapolated AUC: It corresponds to the threshold of the percentage of the total predicted AUC (or AURC for urine models) due to the extrapolation to infinity after the last measurement. If activated, it will fill the value of Flag_AUC_%Extrap_pred: if AUC_%Extrap_pred < threshold, then Flag_AUC_%Extrap_pred=1. The default value is 20%.
  • Span: It corresponds to the threshold of the span. The span corresponds to the ratio of the sampling interval length for the \(\lambda_z\) calculation and the half-life. If activated, it will fill the value of Flag_Span: if Span > threshold, then Flag_Span=1. The default value is 3.
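For illustration, the flags can be recomputed from the exported individual results table. This is a sketch only: the file name ncaIndividualParameters.txt is an assumption (check your result folder), the column names follow the NCA parameters page, and R replaces “%” in column names with a dot when reading the file.

# Sketch: recomputing the acceptance flags from the exported NCA results.
# The file name below is an assumption; check the result folder of your project.
nca <- read.table("results/ncaIndividualParameters.txt", sep = ",", header = TRUE)
Flag_Rsq_adjusted    <- as.integer(nca$Rsq_adjusted > 0.98)
Flag_AUC_Extrap_pred <- as.integer(nca$AUC_.Extrap_pred < 20)   # "%" becomes "." in R
Flag_Span            <- as.integer(nca$Span > 3)
accepted <- Flag_Rsq_adjusted & Flag_AUC_Extrap_pred & Flag_Span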




Global

  • Obs. ID to use: when an OBSERVATION ID column defines several types of measurements in the data set, this setting permits choosing which one to use for the NCA analysis.

3.5.Parameters summary #

This page provides information on the statistics used to summarize the individual parameters for both the NCA and CA tasks. All statistics are computed over the NOBS individuals for which the parameter has a value. For example, in the NCA task, it might not be possible to compute \(\lambda_z\) for some subjects; the corresponding values are then missing and not taken into account in the statistics. A small R sketch reproducing some of these statistics follows the list.

  • MIN: Minimum of the parameter over the NOBS individuals
  • Q1: First quartile of the parameter over the NOBS individuals
  • MEDIAN: Median of the parameter over the NOBS individuals
  • Q3: Third quartile of the parameter over the NOBS individuals
  • MAX: Maximum of the parameter over the NOBS individuals
  • MEAN: Mean of the parameter over the NOBS individuals
  • SD: Standard deviation of the parameter over the NOBS individuals
  • SE: Standard error of the parameter over the NOBS individuals
  • CV: Coefficient of variation of the parameter over the NOBS individuals
  • NTOT: Total number of individuals
  • NOBS: Number of individuals with a valid value
  • NMISS: Number of individuals with no valid value
  • GEOMEAN: Geometric mean of the parameter over the NOBS individuals
  • GEOSD: Geometric standard deviation of the parameter over the NOBS individuals
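The sketch below reproduces some of these statistics in R for a parameter vector x, honouring the NOBS convention (missing values excluded); expressing the CV in percent is an assumption here.

# Sketch: summary statistics for one parameter vector (hypothetical values).
x <- c(1.2, 0.8, NA, 2.2, 1.5)                   # NA = individual without a valid value
NTOT <- length(x); NOBS <- sum(!is.na(x)); NMISS <- sum(is.na(x))
MEAN <- mean(x, na.rm = TRUE); SD <- sd(x, na.rm = TRUE)
SE   <- SD / sqrt(NOBS)
CV   <- 100 * SD / MEAN                          # coefficient of variation (assumed in %)
GEOMEAN <- exp(mean(log(x), na.rm = TRUE))
GEOSD   <- exp(sd(log(x), na.rm = TRUE))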

3.6.NCA parameters with respect to covariates #

Purpose

The figure displays the individual parameters as a function of the covariates. It allows the user to identify correlations between the individual parameters and the covariates.

Identifying correlation effects

In the example below, we can see the parameters Cl_obs and Cmax with respect to the covariates: the weight WT, the age AGE and the sex category.

Visual guidelines

To help identify correlations, regression lines, spline interpolations and correlation coefficients can be overlaid on the plots for continuous covariates. Here we can see a strong correlation between the parameter Cl_obs and both the age and the weight.

Highlight

Hovering on a point reveals the corresponding individual and, if multiple individual parameters have been simulated from the conditional distribution for each individual, highlights all the points from the same individual. This is useful to identify possible outliers and subsequently check their behavior in the observed data.

Selection

It is possible to select a subset of covariates or parameters, as shown below. In the selection panel, a set of contiguous rows can be selected with a single extended click, or a set of non-contiguous rows can be selected with several clicks while holding the Ctrl key. This is useful when there are many parameters or covariates. 

Stratification

Stratification can be applied by creating groups of covariate values. As can be seen below, these groups can then be split, colored or filtered, allowing the user to check the effect of the covariate on the correlation between two parameters. The correlation coefficient is updated according to the split or filtering.

 Settings

  • General
    • Legend and grid: add/remove the legend or the grid. There is only one legend for all plots.
    • Information: display/hide the correlation coefficient associated with each scatter plot.
  • Display
    • Selection. The user can select some of the parameters or covariates to display only the corresponding plots. A simple click selects one parameter (or covariate), whereas multiple clicks while holding the Ctrl key selects a set of parameters.
    • Visual cues. Add/remove a regression line or a spline interpolation.

3.7.Distribution of the NCA parameters #

Purpose

This figure can be used to see the empirical distribution of the NCA parameters. Further analyses such as stratification by covariate can be performed, as detailed below.

PDF and CDF

It is possible to display the theoretical distribution and the histogram of the empirical distribution as proposed below.

The distributions are represented as histograms for the probability density function (PDF). Hovering on the histogram also reveals the density value of each bin, as shown on the figure below.

The cumulative distribution function (CDF) is proposed too.

Example of stratification

It is possible to stratify the population by some covariate values and obtain the distributions of the individual parameters in each group. This can be useful to check covariate effects, in particular when the distribution of a parameter exhibits two or more peaks for the whole population. In the following example, the distribution of the parameter k from the same example as above has been split for two groups of individuals according to the value of SEX, allowing the user to visualize two clearly different distributions.

Settings

  • General: add/remove the legend, and the grid
  • Display
    • Distribution function: The user can choose to display either the probability density function (PDF) as histogram or the cumulative distribution function (CDF).

3.8.Correlation between NCA parameters #

Purpose

This plot displays scatter plots for each pair of parameters. It allows the user to identify correlations between parameters and to check the coherence of the parameters for each individual.

Example

In the following example, one can see the scatter plots for all pairs of estimated parameters.

Visual guidelines

In addition to regression lines, correlation coefficients can be added to quantify the correlation between parameters, as well as spline interpolations.

Selection

It is possible to select a subset of parameters, whose pairs of correlations are then displayed, as shown below. In the selection panel, a set of contiguous rows can be selected with a single extended click, or a set of non-contiguous rows can be selected with several clicks while holding the Ctrl key.

Highlight

Similarly to other plots, hovering on a point provides information on the corresponding subject id, and highlights other points corresponding to the same individual.

Stratification: coloring and filtering

Stratification can be applied by creating groups of covariate values. As can be seen below, these groups can then be split, colored and/or filtered, allowing the user to check the effect of the covariate on the correlation between two parameters. The correlation coefficient is updated according to the stratifying action. In the following case, we split by the covariate SEX and color by 2 categories of AGE.

Settings

  • General
    • Legend and grid: add/remove the legend or the grid. There is only one legend for all plots.
    • Information: display/hide the correlation coefficient associated with each scatter plot.
  • Display
    • Selection. The user can select some of the parameters to display only the corresponding scatter plots. A simple click selects one parameter, whereas multiple clicks while holding the Ctrl key selects a set of parameters.
    • Visual cues. Add/remove the regression line or the spline interpolation.

4.Compartmental Analysis #

One of the main features of PKanalix is the calculation of the parameters in the Compartmental Analysis framework. It consists in finding parameters of a model representing the PK as the dynamics in compartments for each individual. It uses the Nelder-Mead algorithm.





CA task

There is a dedicated task in the “Tasks” frame as in the following figure.

This task contains two different parts.

  • The first one, called “Run”, corresponds to the calculation button and the settings for the model and the calculation. The meaning of all the settings and their default values is defined here.
  • The second one, called “Check Init.”, allows the user to visualize, for each individual, the predictions obtained with the initial values together with the data points (as explained here). It is very useful to find good initial estimates before the optimization.

CA results

Once the CA task has been run, the results are available in the “Results” frame. Two tables are proposed.

Compartmental analysis results per individual

Individual estimates of the CA parameters are displayed in the table of the “INDIV. ESTIM.” tab, as in the following figure.

All the computed parameters depend on the chosen model. Notice that on all tables, there is an icon on the top right to copy the table into a Word or Excel file.

Statistics on compartmental analysis results

A summary table is also proposed in the “SUMMARY” tab, as in the following figure.


All the summary calculations are described here. Notice that on all tables, there is an icon on the top right to copy the table into a Word or Excel file.

CA plots

In the “Plots” frame, numerous plots associated with the individual parameters are displayed.

  • Individual fits: The purpose of this plot is to display the fit for each individual.
  • Correlation between CA parameters: The purpose of this plot is to display scatter plots for each pair of parameters. It allows the user to identify correlations between parameters and to check the coherence of the parameters for each individual.
  • Distribution of the CA parameters: The purpose of this plot is to see the empirical distribution of the parameters and thus have an idea of their distribution over the individuals.
  • CA parameters w.r.t. covariates: The purpose of this plot is to display the individual parameters as a function of the covariates. It allows the user to identify correlations between the individual parameters and the covariates.

CA outputs

After running the CA task, the following files are available in the result folder:

  • caSummary.txt contains the summary of the CA parameters calculation, in a format easily readable by a human (but not easy to parse for a computer)
  • caIndividualParametersSummary.txt contains the summary of the CA parameters in a computer-friendly format.
    • The first column corresponds to the name of the parameters
    • The other columns correspond to the several elements describing the summary of the parameters (as explained here)
  • caIndividualParameters.txt contains the CA parameters for each subject-occasion along with the covariates.
    • The first line corresponds to the name of the parameters
    • The other lines correspond to the value of the parameters

The files caIndividualParametersSummary.txt and caIndividualParameters.txt can be imported into R, for example using the following command:

 read.table("/path/to/file.txt", sep = ",", header = TRUE)

Remark

  • The separator is the one defined in the user preferences. We use “,” in this example as it is the default.

4.1.CA settings #

The following page describes all the settings for the parameter calculations.

Model

These settings define the model used for the individual fit of the data set. This model is a PK model from the MonolixSuite PK library.
The PK library includes models with different administration routes (bolus, infusion, first-order absorption, zero-order absorption, with or without Tlag), different numbers of compartments (1, 2 or 3), and different types of elimination (linear or Michaelis-Menten). More details, including the full equations of each model, can be found on the dedicated page for the model libraries.
The PK library models can be used with single or multiple dose data, but they allow only one type of administration in the data set (only oral or only bolus, but not some individuals with bolus and others with oral, for instance).
When you click on “SELECT”, the list of available model files appears, as well as a menu to filter them. Use the filters and the indications in the file names (parameter names) to select the model file you need.

Along with the selected model, you have the initial parameters to define. To evaluate graphically the impact of these parameters, you can go to the “Check Init.” tab.

Calculations settings





These settings impact the calculation of the CA parameters.

  • Weighting: Type of weighting objective function. Possible methods are “uniform”, “Yobs”, “Ypred”, “Ypred2” or “Yobs2” (default value is “Yobs2”).
  • Pool fit: If FALSE (default), the fit uses individual parameters; if TRUE, the same parameters are used for all individuals.
  • Method for BLQ: Method to replace the BLQ data. Possible options are: “zero”, “LOQ”, “LOQ2” or “missing” (default value is “missing”).

4.2.CA check initial parameters #

When clicking on “Check init.”, the predictions obtained with the initial values are displayed for each individual together with the data points. This feature is very useful to find “good” initial values. You can change the values of the parameters and see how the agreement with the data changes. In addition, you can change the axes to log-scale and choose the same limits on all axes to better compare the individuals.

On the bottom (in the green box), all the parameters are displayed; you can play with them and directly see the impact on the prediction (in red) for each individual. In addition, there is an “AUTO-INIT” button (in the blue block on the right), which automatically provides good initial estimates of all the parameters, as in the following example. To set the new parameters as initial values for the calculation, click on “SET AS INITIAL VALUES”. This brings you back to the settings for the CA parameter calculation.

 

4.3.CA individual fits #

Purpose

The figure displays the observed data for each subject, as well as the prediction obtained with the individual parameters.

Individual parameters

Information on individual parameters can be used in two ways, as shown below. By clicking on Information (marked in green on the figure) in the General panel, individual parameter values can be displayed on each individual plot. Moreover, the plots can be sorted according to the values for a given parameter, in ascending or descending order (Sorting panel marked in blue). By default, the individual plots are sorted by subject id, with the same order as in the data set.

Special zoom

User-defined constraints for the zoom are available. They allow to zoom in according to one axis only instead of both axes. Moreover, a link between plots can be set in order to perform a linked zoom on all individual plots at once. This is shown on the figure below with observations from the remifentanil example, and individual fits from a two-compartment model. It is thus possible to focus on the same time range or observation values for all individuals. In this example it is used to zoom on time on the elimination phase for all individuals, while keeping the Y axis in log scale unchanged for each plot.

Censored data

When an observation is censored, it is different from a “classical” observation and thus has a different representation: it is displayed as a bar from the censored value specified in the data set to the associated limit.

Settings

  • Grid arrange. The user can define the number of subjects that are displayed, as well as the number of rows and columns. Moreover, a slider allows changing the subjects under consideration.
  • General
    • Legend: hide/show the legend. The legend adapts automatically to the elements displayed on the plot. The same legend box applies to all subplots and it is possible to drag and drop the legend to the desired place.
    • Grid: hide/show the grid in the background of the plots.
    • Information: hide/show the individual parameter values for each subject (conditional mode or conditional mean depending on the “Individual estimates” choice in the settings section “Display”).
    • Dosing times: hide/show dosing times as vertical lines for each subject.
    • Link between plots: activate the linked zoom for all subplots. The same zooming region can be applied to all individuals on the x-axis only, on the y-axis only, or on both; the option “none” deactivates the link.
  • Display
    • Observed data: hide/show the observed data.
    • Censored intervals [if censored data present]: hide/show the data marked as censored (BLQ), shown as a rectangle representing the censoring interval (for instance [0, LOQ]).
    • Split occasions [if IOV present]: Split the individual subplots by occasions in case of IOV.
    • Number of points used in the calculation of the prediction.
  • Sorting: Sort the subjects by ID or individual parameter values in ascending or descending order.

By default, only the observed data and the individual fits are displayed.

4.4.CA parameters with respect to covariates #

Purpose

The figure displays the individual parameters as a function of the covariates. It allows the user to identify correlations between the individual parameters and the covariates.

Identifying correlation effects

In the example below, we can see the parameters Cl and V1 with respect to the covariates: the weight WT, the age AGE and the sex category.

Visual guidelines

To help identify correlations, regression lines, spline interpolations and correlation coefficients can be overlaid on the plots for continuous covariates. Here we can see a strong correlation between the parameter Cl and both the age and the weight. The same holds for V1.

Highlight

Hovering on a point reveals the corresponding individual and, if multiple individual parameters have been simulated from the conditional distribution for each individual, highlights all the points from the same individual. This is useful to identify possible outliers and subsequently check their behavior in the observed data.

Selection

It is possible to select a subset of covariates or parameters, as shown below. In the selection panel, a set of contiguous rows can be selected with a single extended click, or a set of non-contiguous rows can be selected with several clicks while holding the Ctrl key. This is useful when there are many parameters or covariates.

Stratification

Stratification can be applied by creating groups of covariate values. As can be seen below, these groups can then be split, colored or filtered, allowing the user to check the effect of the covariate on the correlation between two parameters. The correlation coefficient is updated according to the split or filtering.

 Settings

  • General
    • Legend and grid: add/remove the legend or the grid. There is only one legend for all plots.
    • Information: display/hide the correlation coefficient associated with each scatter plot.
  • Display
    • Selection. The user can select some of the parameters or covariates to display only the corresponding plots. A simple click selects one parameter (or covariate), whereas multiple clicks while holding the Ctrl key selects a set of parameters.
    • Visual cues. Add/remove a regression line or a spline interpolation.

4.5.Distribution of the CA parameters #

Purpose

This figure can be used to see the empirical distribution of the CA parameters. Further analyses such as stratification by covariate can be performed, as detailed below.

PDF and CDF

It is possible to display the theoretical distribution and the histogram of the empirical distribution as proposed below.

The distributions are represented as histograms for the probability density function (PDF). Hovering on the histogram also reveals the density value of each bin, as shown on the figure below.

The cumulative distribution function (CDF) is proposed too.

Example of stratification

It is possible to stratify the population by some covariate values and obtain the distributions of the individual parameters in each group. This can be useful to check covariate effects, in particular when the distribution of a parameter exhibits two or more peaks for the whole population. In the following example, the distribution of the parameter k from the same example as above has been split for two groups of individuals according to the value of SEX, allowing the user to visualize two clearly different distributions.

Settings

  • General: add/remove the legend, and the grid
  • Display
    • Distribution function: The user can choose to display either the probability density function (PDF) as histogram or the cumulative distribution function (CDF).

4.6.Correlation between CA parameters #

Purpose

This plot displays scatter plots for each pair of parameters. It allows the user to identify correlations between parameters and to check the coherence of the parameters for each individual.

Example

In the following example, one can see the scatter plots for all pairs of estimated parameters.

Visual guidelines

In addition to regression lines, correlation coefficients can be added to quantify the correlation between parameters, as well as spline interpolations.

Selection

It is possible to select a subset of parameters, whose pairs of correlations are then displayed, as shown below. In the selection panel, a set of contiguous rows can be selected with a single extended click, or a set of non-contiguous rows can be selected with several clicks while holding the Ctrl key.

Highlight

Similarly to other plots, hovering on a point provides information on the corresponding subject id, and highlights other points corresponding to the same individual.

Stratification: coloring and filtering

Stratification can be applied by creating groups of covariate values. As can be seen below, these groups can then be split, colored and/or filtered, allowing the user to check the effect of the covariate on the correlation between two parameters. The correlation coefficient is updated according to the stratifying action. In the following case, we split by the covariate SEX and color by 2 categories of AGE.

Settings

  • General
    • Legend and grid: add/remove the legend or the grid. There is only one legend for all plots.
    • Information: display/hide the correlation coefficient associated with each scatter plot.
  • Display
    • Selection. The user can select some of the parameters to display only the corresponding scatter plots. A simple click selects one parameter, whereas multiple clicks while holding the Ctrl key selects a set of parameters.
    • Visual cues. Add/remove the regression line or the spline interpolation.

5.R functions to run PKanalix #

On the use of the R functions

PKanalix can be called via R functions. It is possible to access the project exactly as you would with the interface. All the functions are described below.





Installation and initialization

All the installation guidelines and initialization procedure can be found here.

Description of the functions concerning the project management

  • getData: Get a description of the data used in the current project.
  • getStructuralModel: Get the model file for the structural model used in the current project (CA analysis).
  • loadProject: Load a project by parsing the Mlxtran-formatted file whose path has been given as an input.
  • newProject: Create a new empty project providing model and data specification.
  • saveProject: Save the current project as an Mlxtran-formatted file.
  • setData: Set project data giving a data file and specifying headers and observations types.
  • setStructuralModel: Set the structural model.

Description of the functions concerning the scenario

  • runCAEstimation: Estimate the CA parameters for each individual of the project.
  • runEstimation: Run the NCA analysis and the CA analysis if the structural model for the CA calculation is well defined.
  • runNCAEstimation: Estimate the NCA parameters for each individual of the project.
  • abort: Stop the current task run.
  • getLastRunStatus: Return an execution report about the last run with a summary of the error which could have occurred.
  • isRunning: Check if a scenario is currently running.

Description of the functions concerning the results

  • getCAIndividualParameters: Get the estimated values for each subject of some of the individual CA parameters of the current project.
  • getNCAIndividualParameters: Get the estimated values for each subject of some of the individual NCA parameters of the current project.

Description of the functions concerning the data

Description of the functions concerning the compartmental and non compartmental analysis settings

  • getCASettings: Get the settings associated to the compartmental analysis.
  • getDataSettings: Get the data settings associated to the non compartmental analysis.
  • getGlobalObsIdToUse: Get the global observation id used in both the compartmental and non compartmental analysis.
  • getNCASettings: Get the settings associated to the non compartmental analysis.
  • setCASettings: Set the settings associated to the compartmental analysis.
  • setDataSettings: Set the value of one or several of the data settings associated to the non compartmental analysis.
  • setGlobalObsIdToUse: Set the global observation id used in both the compartmental and non compartmental analysis.
  • setNCASettings: Set the value of one or several of the settings associated to the non compartmental analysis.

Description of the functions concerning preferences and project settings

Example

Below is an example of the functions to call to run an NCA and CA analysis from scratch using one of the demo data sets.

# load library and initialize the API
library(lixoftConnectors)
initializeLixoftConnectors(software="pkanalix")

# create a new project by setting a data set
# replace <userFolder> by the path to your home directory
demoPath = '<userFolder>/lixoft/pkanalix/pkanalix2019R1/demos/2.case_studies/data/'
newProject(data = list(dataFile = paste0(demoPath,'M2000_ivbolus_singledose.csv'),
                       headerTypes = c('id','time','amount','observation',
                                       'catcov','contcov','contcov','contcov'),
                       observationTypes = 'continuous'))

# set the options for the NCA analysis
setNCASettings(administrationtype = list("1"="intravascular"),
               integralMethod = "LinLogTrapLinLogInterp", 
               lambdaRule="adjustedR2")

# run the NCA analysis
runNCAEstimation()

# retrieve the output of interest
indivParams <- getNCAIndividualParameters("AUCINF_pred","Cmax")

The estimated NCA parameters can then be further analyzed using typical R functions, and plotted.
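For instance, assuming the returned object exposes the parameter table under $parameters (as in the getCAIndividualParameters example later on this page):

params <- indivParams$parameters   # one row per individual (assumed structure)
summary(params$Cmax)               # quick distribution summary
hist(log(params$Cmax), xlab = "log(Cmax)", main = "Distribution of log(Cmax)")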

5.1.R-package installation and initialization #

This page presents the installation procedure of the lixoftConnectors R package, which allows running PKanalix from R.

Installation

The R package lixoftConnectors is located in the installation directory as a tar.gz archive. It can be installed directly using RStudio (Tools > Install packages > from package archive file) or with the following R command:

install.packages(packagePath, repos = NULL, type="source", INSTALL_opts ="--no-multiarch")

with packagePath = ‘<installDirectory>/connectors/lixoftConnectors.tar.gz’ where <installDirectory> is the MonolixSuite installation directory.

With the default installation directory, the command is:

# for Windows OS
install.packages("C:/ProgramData/Lixoft/MonolixSuite2019R1/connectors/lixoftConnectors.tar.gz", 
                 repos = NULL, type="source", INSTALL_opts ="--no-multiarch")
# for Mac OS
install.packages("/Applications/MonolixSuite2019R1.app/Contents/Resources/monolixSuite/connectors/lixoftConnectors.tar.gz",
                 repos = NULL, type="source", INSTALL_opts ="--no-multiarch")

The lixoftConnectors package depends on the RJSONIO package that may need to be installed from CRAN first using:

install.packages('RJSONIO')

Initializing

When starting a new R session, you need to load the library and initialize the connectors with the following commands

library(lixoftConnectors)
initializeLixoftConnectors(software = "pkanalix")

In some cases, it may be necessary to specify the path to the installation directory of the Lixoft suite. If no path is given, the one written in the <user home>/lixoft/lixoft.ini file is used (usually “C:/ProgramData/Lixoft/MonolixSuiteXXXX” for Windows) where XXXX corresponds to the version of MonolixSuite.

library(lixoftConnectors) 
initializeLixoftConnectors(software = "pkanalix", path = "/path/to/MonolixSuite/")

 

Making sure the installation is ok

To test if the installation is ok, you can load and run a project from the demos as follows:

demoPath = '<userFolder>/lixoft/pkanalix/pkanalix2019R1/demos/1.basic_examples/'
loadProject(paste0(demoPath ,'project_ivbolus.pkx'))
runNCAEstimation()
getNCAIndividualParameters()

where <userFolder> is the user’s home folder (on Windows, C:/Users/toto if toto is your username). These commands should output the estimated NCA parameters.

5.2.Description of the R functions associated to PKanalix project management #

getData Get a description of the data used in the current project.
getStructuralModel Get the model file for the structural model used in the current project.
loadProject Load a project by parsing the Mlxtran-formatted file whose path has been given as an input.
newProject Create a new empty project providing model and data specification.
saveProject Save the current project as an Mlxtran-formatted file.
setData Set project data giving a data file and specifying headers and observations types.
setStructuralModel Set the structural model.

Get project data

Description

Get a description of the data used in the current project. The available information is:

  • dataFile (string): path to the data file
  • header (array<character>): vector of header names
  • headerTypes (array<character>): vector of header types
  • observationNames (vector<string>): vector of observation names
  • observationTypes (vector<string>): vector of observation types
  • nbSSDoses (int): number of doses (if there is a SS column)

Usage

getData()

Value

A list describing project data.

See Also

setData

Click here to see examples

## Not run:

data = getData()

data

-> $dataFile

"/path/to/data/file.txt"

$header

c("ID","TIME","CONC","SEX","OCC")

$headerTypes

c("ID","TIME","OBSERVATION","CATEGORICAL COVARIATE","IGNORE")

$observationNames

c("concentration")

$observationTypes

c(concentration = "continuous")

## End(Not run)


Top of the page, PKanalix-R functions.


Get structural model file

Description

Get the model file for the structural model used in the current project.

Usage

getStructuralModel()

Value

A string corresponding to the path to the structural model file.

See Also

setStructuralModel

Click here to see examples

## Not run:

getStructuralModel() => "/path/to/model/inclusion/modelFile.txt"

## End(Not run)


Top of the page, PKanalix-R functions.


Load project from file

Description

Load a project by parsing the Mlxtran-formatted file whose path has been given as an input.
WARNING: R distinguishes between ‘\’ and ‘/’; only ‘/’ can be used in paths.

Usage

loadProject(projectFile)

Arguments

projectFile
(character) Path to the project file. Can be absolute or relative to the current working directory.

See Also

saveProject

Click here to see examples

## Not run:

loadProject("/path/to/project/file.mlxtran") # for Linux platforms

loadProject("C:/Users/path/to/project/file.mlxtran") # for Windows platforms

## End(Not run)


Top of the page, PKanalix-R functions.


Create new project

Description

Create a new empty project providing model and data specification. The data specification is:

  • dataFile (string): path to the data file
  • headerTypes (array<character>): vector of header types
  • observationTypes (list): a list giving the type of each observation present in the data file (if there is only one y-type, the corresponding observation name can be omitted)
  • nbSSDoses (int): number of steady-state doses (if there is a SS column)

Please refer to setData documentation for a comprehensive description of the “data” argument structure.

Usage

newProject(modelFile = NULL, data)

Arguments

modelFile
(character) Path to the model file. Can be absolute or relative to the current working directory.

data
(list) Structure describing the data.

See Also

newProject saveProject

Click here to see examples

## Not run:

newProject(data = list(dataFile = "/path/to/data/file.txt",
                       headerTypes = c("IGNORE","OBSERVATION"),
                       observationTypes = "continuous"),
           modelFile = "/path/to/model/file.txt")

## End(Not run)


Top of the page, PKanalix-R functions.


Save current project

Description

Save the current project as an Mlxtran-formatted file.

Usage

saveProject(projectFile = "")

Arguments

projectFile
(character) Path to the project file. Can be absolute or relative to the current working directory.

See Also

newProject loadProject

Click here to see examples

## Not run:

saveProject("/path/to/project/file.mlxtran") # save a copy of the project

saveProject() # update the current project

## End(Not run)


Top of the page, PKanalix-R functions.


Set project data

Description

Set project data giving a data file and specifying headers and observations types.

Usage

setData(dataFile, headerTypes, observationTypes, nbSSDoses = NULL)

Arguments

  • dataFile (character): Path to the data file. Can be absolute or relative to the current working directory.
  • headerTypes (array<character>): A collection of header types. The possible header types are: “ignore”, “id”, “time”, “observation”, “amount”, “contcov”, “catcov”, “occ”, “evid”, “mdv”, “obsid”, “cens”, “limit”, “regressor”, “admid”, “rate”, “tinf”, “ss”, “ii”, “addl”, “date”. Note that these are not the types displayed in the interface; they are shortcuts.
  • observationTypes (list): A list giving the type of each observation present in the data file. If there is only one y-type, the corresponding observation name can be omitted. The possible observation types are “continuous”, “discrete”, and “event”.
  • nbSSDoses [optional](int): Number of doses (if there is a SS column).

See Also

getData

Click here to see examples

## Not run:

setData(dataFile = "/path/to/data/file.txt", headerTypes = c("IGNORE","OBSERVATION"), observationTypes = "continuous")

setData(dataFile = "/path/to/data/file.txt", headerTypes = c("IGNORE","OBSERVATION","YTYPE"), observationTypes = list(Concentration = "continuous", Level = "discrete"))

## End(Not run)


Top of the page, PKanalix-R functions.


Set structural model file

Description

Set the structural model.
NOTE: In PKanalix, the user can only use a structural model from the library for the CA analysis. Thus, the structural model should be written ‘lib:modelFromLibrary.txt’.

Usage

setStructuralModel(modelFile)

Arguments

modelFile
(character) Path to the model file. Can be absolute or relative to the current working directory.

See Also

getStructuralModel

Click here to see examples

## Not run:

setStructuralModel("/path/to/model/file.txt") # for Monolix

setStructuralModel("lib:oral1_2cpt_kaClV1QV2.txt") # for PKanalix

## End(Not run)


Top of the page, PKanalix-R functions.

5.3.Description of the R functions associated to PKanalix scenario #

runCAEstimation Estimate the CA parameters for each individual of the project.
runEstimation Run the NCA analysis, and the CA analysis if the structural model for the CA calculation is defined.
runNCAEstimation Estimate the NCA parameters for each individual of the project.
abort Stop the current task run.
getLastRunStatus Return an execution report about the last run, with a summary of the errors that may have occurred.
isRunning Check if a scenario is currently running.
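
As an illustration, these functions can be combined into a simple batch workflow. The sketch below is minimal and makes two assumptions: the connectors have been initialized for PKanalix, and a project exists at the indicated (hypothetical) path. Depending on the version, the run functions may block until completion, in which case the polling loop is unnecessary.

## Not run:
loadProject("/path/to/project/file.pkx")  # hypothetical path to an existing PKanalix project

runNCAEstimation()                        # launch the NCA run

while (isRunning()) {                     # poll until the scenario has finished
  Sys.sleep(1)
}

info <- getLastRunStatus()
if (info$status) {
  message("NCA run completed successfully")
} else {
  message(info$report)                    # summary of the errors that occurred
}
## End(Not run)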

Estimate the individual parameters using compartmental analysis.

Description

Estimate the CA parameters for each individual of the project.

Usage

runCAEstimation()

Examples

## Not run:

runCAEstimation()

## End(Not run)




Run both non compartmental and compartmental analysis.

Description

Run the non compartmental analysis, and the compartmental analysis if the structural model for the CA calculation is defined.

Usage

runEstimation()

Examples

## Not run:

runEstimation()

## End(Not run)




Estimate the individual parameters using non compartmental analysis.

Description

Estimate the NCA parameters for each individual of the project.

Usage

runNCAEstimation()

Examples

## Not run:

runNCAEstimation()

## End(Not run)



Stop the current task run

Description

Stop the current task run.

Usage

abort()

See Also

runScenario

Examples

## Not run:

abort()

## End(Not run)




Get last run status

Description

Return an execution report about the last run, with a summary of the errors that may have occurred.

Usage

getLastRunStatus()

Value

A structure containing

  1. a boolean equal to TRUE if the last run completed successfully,
  2. a summary of the errors that may have occurred.

See Also

runScenario abort isRunning

Examples

## Not run:
lastRunInfo = getLastRunStatus()
lastRunInfo$status
-> TRUE
lastRunInfo$report
-> ""

## End(Not run)




Get current scenario state

Description

Check if a scenario is currently running. If so, information about the currently running task can be displayed.

Usage

isRunning(verbose = FALSE)

Arguments

verbose
(bool) Should information about the currently running task be displayed in the console. Equals FALSE by default.

Value

A boolean which equals TRUE if a scenario is currently running.

See Also

runScenario abort

Examples

## Not run:

isRunning()

## End(Not run)



5.4.Description of the R functions associated with PKanalix results #

getCAIndividualParameters Get the estimated values for each subject of some of the individual CA parameters of the current project.
getNCAIndividualParameters Get the estimated values for each subject of some of the individual NCA parameters of the current project.
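
For instance, after a run the estimated parameters can be retrieved and exported from R. A minimal sketch, assuming a project is already loaded and that the returned list exposes the parameter table under $parameters, as in the examples below; the output path is hypothetical.

## Not run:
runNCAEstimation()                                  # compute the NCA parameters
res <- getNCAIndividualParameters("Tmax", "Clast")  # retrieve two NCA parameters
write.csv(res$parameters,                           # individual parameter table
          "/path/to/output/ncaParameters.csv",      # hypothetical output path
          row.names = FALSE)
## End(Not run)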

Get CA individual parameters

Description

Get the estimated values for each subject of some of the individual CA parameters of the current project.

Usage

getCAIndividualParameters(...)

Arguments


...
(string) Name of the individual parameters whose values must be displayed. If no argument is provided, all the available parameters are returned.

Value

A data frame giving the estimated values of the individual parameters of interest for each subject,
and a list of their associated statistics.

Examples

## Not run:
indivParams = getCAIndividualParameters() # retrieve the values of all the available individual parameters
indivParams = getCAIndividualParameters("ka", "V") # retrieve the values of the individual parameters "ka" and "V"

$parameters ->
  id  ka   V
  1   0.8  1.2
  .   ...  ...
  N   0.4  2.2

## End(Not run)




Get NCA individual parameters

Description

Get the estimated values for each subject of some of the individual NCA parameters of the current project.

Usage

getNCAIndividualParameters(...)

Arguments


...
(string) Name of the individual parameters whose values must be displayed. If no argument is provided, all the available parameters are returned.

Value

A data frame giving the estimated values of the individual parameters of interest for each subject,
and a list of their associated statistics.

Examples

## Not run:
indivParams = getNCAIndividualParameters() # retrieve the values of all the available parameters
indivParams = getNCAIndividualParameters("Tmax", "Clast") # retrieve the values of the parameters "Tmax" and "Clast"

$parameters ->
  id  Tmax  Clast
  1   0.8   1.2
  .   ...   ...
  N   0.4   2.2

## End(Not run)


5.5.Description of the R functions associated with the data set #

getObservationInformation Get the name, the type and the values of the observations present in the project.
getCovariateInformation Get the name, the type and the values of the covariates present in the project.

Get observations information

Description

Get the name, the type and the values of the observations present in the project.

Usage

getObservationInformation()

Value

A list containing the name of the observations, their type, and their values as a data frame with columns id, time and the observation name (and occasion if present in the data set).

Examples

## Not run:
info = getObservationInformation()
info
-> $name
c("concentration")
-> $concentration
  id  time  concentration
  1   0.5   0.0
  .   .     .
  N   9.0   10.8

## End(Not run)




Get covariates information

Description

Get the name, the type and the values of the covariates present in the project.

Usage

getCovariateInformation()

Value

A list containing the following fields:

  • name: (vector<string>) covariate names.
  • type: (vector<string>) covariate types. Existing types are “continuous”, “continuoustransformed”, “categorical”, “categoricaltransformed”.
  • modalityNumber: (vector<int>) number of modalities (for latent covariates only).
  • covariate: a data frame giving the values of continuous and categorical covariates for each subject.

Examples

## Not run:
info = getCovariateInformation()
info
-> $name
c("sex", "wt")
-> $type
c(sex = "categorical", wt = "continuous")
-> $modalityNumber
c(lcat = 2)
-> $covariate
  id  sex  wt
  1   M    66.7
  .   .    .
  N   F    59.0
## End(Not run)
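
The covariate information can also be combined with the estimated parameters, for example to summarize an NCA parameter by a categorical covariate. A minimal sketch, assuming both tables contain an id column, as in the examples above:

## Not run:
cov <- getCovariateInformation()$covariate            # id, sex, wt
nca <- getNCAIndividualParameters("Tmax")$parameters  # id, Tmax

merged <- merge(cov, nca, by = "id")     # join the two tables on the subject id
tapply(merged$Tmax, merged$sex, median)  # median Tmax per sex category
## End(Not run)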

5.6.Description of the R functions associated with PKanalix settings #

getCASettings Get the settings associated with the compartmental analysis.
getDataSettings Get the data settings associated with the non compartmental analysis.
getGlobalObsIdToUse Get the global observation id used in both the compartmental and non compartmental analysis.
getNCASettings Get the settings associated with the non compartmental analysis.
setCASettings Set the settings associated with the compartmental analysis.
setDataSettings Set the value of one or several of the data settings associated with the non compartmental analysis.
setGlobalObsIdToUse Set the global observation id used in both the compartmental and non compartmental analysis.
setNCASettings Set the value of one or several of the settings associated with the non compartmental analysis.

Get the settings associated with the compartmental analysis

Description

Get the settings associated with the compartmental analysis. Associated settings are:

“weightingCA” (string) Type of weighting objective function.
“pool” (logical) Fit with the same parameters for all individuals (TRUE) or with individual parameters (FALSE).
“initialValues” (list) list(param = value, …), where value is the initial value of the individual parameter param.
“blqMethod” (string) Method by which the BLQ data should be replaced.

Usage

getCASettings(...)

Arguments


...
[optional] (string) Name of the settings whose value should be displayed. If no argument is provided, all the settings are returned.

Value

An array which associates each setting name to its current value.

See Also

setCASettings

Examples

## Not run:
getCASettings() # retrieve a list of all the CA methodology settings
getCASettings("weightingca", "blqmethod") # retrieve a list containing only the value of the settings whose names are passed as arguments

## End(Not run)




Get the data settings associated with the non compartmental analysis

Description

Get the data settings associated with the non compartmental analysis. Associated settings are:

“urinevolume” (string) Regressor name used as urine volume.
“datatype” (list) list(“obsId” = string(“plasma” or “urine”)). The type of data associated with each obsId (observation ID from the data set).

Usage

getDataSettings(...)

Arguments


...
[optional] (string) Name of the settings whose value should be displayed. If no argument is provided, all the settings are returned.

Value

An array which associates each setting name to its current value.

See Also

setDataSettings

Examples

## Not run:
getDataSettings() # retrieve a list of all the data settings
getDataSettings("urinevolume") # retrieve a list containing only the value of the settings whose names are passed as arguments

## End(Not run)




Get the global observation id used in both the compartmental and non compartmental analysis

Description

Get the global observation id used in both the compartmental and non compartmental analysis.

Usage

getGlobalObsIdToUse(...)

Value

The observation id used in computations.

See Also

setGlobalObsIdToUse

Examples

## Not run:
getGlobalObsIdToUse()

## End(Not run)




Get the settings associated with the non compartmental analysis

Description

Get the settings associated with the non compartmental analysis. Associated settings are:

“administrationType” (list) list(key = “admId”, value = string(“intravenous” or “extravascular”)), where admId is the administration ID from the data set, or 1 if there is no admId column in the data set.
“integralMethod” (string) Method for AUC and AUMC calculation and interpolation. “linTrapLinInterp” = linear trapezoidal linear, “linLogTrapLinLogInterp” = linear log trapezoidal, “upDownTrapUpDownInterp” = linear up log down, “linTrapLinLogInterp” = linear trapezoidal linear/log.
“partialAucTime” (list) The first element of the list is a boolean describing if this setting is used. The second element is the value of the bounds of the partial AUC calculation interval.
“blqMethodBeforeTmax” (string) Method by which the BLQ data before Tmax should be replaced.
“blqMethodAfterTmax” (string) Method by which the BLQ data after Tmax should be replaced.
“ajdr2AcceptanceCriteria” (list) The first element of the list is a boolean describing if this setting is used. The second element is the value of the adjusted R2 acceptance criterion for the estimation of lambda_Z.
“extrapAucAcceptanceCriteria” (list) The first element of the list is a boolean describing if this setting is used. The second element is the value of the AUC extrapolation acceptance criterion for the estimation of lambda_Z.
“spanAcceptanceCriteria” (list) The first element of the list is a boolean describing if this setting is used. The second element is the value of the span acceptance criterion for the estimation of lambda_Z.
“lambdaRule” (string) Main rule for the lambda_Z estimation.
“timeInterval” (vector) Time interval for the lambda_Z estimation when “lambdaRule” = “interval”.
“timeValuesPerId” (list) list(“idName” = idTimes, …), where idTimes are the observation times to use for the calculation of lambda_Z for the id idName.
“nbPoints” (integer) Number of points for the lambda_Z estimation when “lambdaRule” = “points”.
“maxNbOfPoints” (list) The first element of the list is a boolean describing if this setting is used. The second element is the maximum number of points to use for the lambda_Z estimation when “lambdaRule” = “R2” or “adjustedR2”.
“startTimeNotBefore” (list) The first element of the list is a boolean describing if this setting is used. The second element is the minimum time value to use for the lambda_Z estimation when “lambdaRule” = “R2” or “adjustedR2”.
“weightingNCA” (string) Weighting method used for the regression that estimates lambda_Z.

Usage

getNCASettings(...)

Arguments


...
[optional] (string) Name of the settings whose value should be displayed. If no argument is provided, all the settings are returned.

Value

An array which associates each setting name to its current value.

See Also

setNCASettings

Examples

## Not run:
getNCASettings() # retrieve a list of all the NCA methodology settings
getNCASettings("lambdaRule", "integralMethod") # retrieve a list containing only the value of the settings whose names are passed as arguments

## End(Not run)
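
Because the settings are returned as name/value pairs, getNCASettings can be combined with setNCASettings to transfer the NCA settings from one project to another. A minimal sketch, assuming the returned structure can be passed back as a named list (the project paths are hypothetical):

## Not run:
loadProject("/path/to/reference/project.pkx")  # hypothetical reference project
ncaSettings <- getNCASettings()                # retrieve all the NCA settings

loadProject("/path/to/other/project.pkx")      # hypothetical target project
do.call(setNCASettings, as.list(ncaSettings))  # apply the same settings
saveProject()
## End(Not run)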




Set the settings associated with the compartmental analysis

Description

Set the settings associated with the compartmental analysis. Associated setting names are:

“weightingCA” (string) Type of weighting objective function. Possible methods are “uniform”, “Yobs”, “Ypred”, “Ypred2” or “Yobs2” (default).
“pool” (logical) If TRUE, fit with the same parameters for all individuals; if FALSE, fit with individual parameters. Default: FALSE.
“initialValues” (list) list(param = value, …), where value is the initial value of the individual parameter param.
“blqMethod” (string) Method by which the BLQ data should be replaced. Possible methods are “zero”, “LOQ”, “LOQ2” or “missing” (default).

Usage

setCASettings(...)

Arguments


...
A collection of comma-separated pairs {settingName = settingValue}.

See Also

getCASettings

Examples

## Not run:
setCASettings(weightingCA = "uniform", blqMethod = "zero") # set the settings whose names are passed as arguments
setCASettings(initialValues = list(Cl = 0.4, V = 0.5, ka = 0.04)) # set the initial values of Cl, V and ka to 0.4, 0.5 and 0.04 respectively

## End(Not run)




Set the value of one or several of the data settings associated with the non compartmental analysis

Description

Set the value of one or several of the data settings associated with the non compartmental analysis. Associated setting names are:

“urinevolume” (string) Regressor name used as urine volume.
“datatype” (list) list(“obsId” = string(“plasma” or “urine”)). The type of data associated with each obsId. Default: “plasma”.

Usage

setDataSettings(...)

Arguments


...
A collection of comma-separated pairs {settingName = settingValue}.

See Also

getDataSettings

Examples

## Not run:
setDataSettings("datatype" = list("Y" = "plasma")) # set the settings whose names are passed as arguments

## End(Not run)




Set the global observation id used in both the compartmental and non compartmental analysis

Description

Set the global observation id used in both the compartmental and non compartmental analysis.

Usage

setGlobalObsIdToUse(...)

Arguments


...
(string) The observation id, from the data section, to use for computations.

See Also

getGlobalObsIdToUse

Examples

## Not run:
setGlobalObsIdToUse("id")

## End(Not run)




Set the value of one or several of the settings associated with the non compartmental analysis

Description

Set the value of one or several of the settings associated with the non compartmental analysis. Associated settings are:

“administrationType” (list) list(key = “admId”, value = string(“intravenous” or “extravascular”)), where admId is the administration ID from the data set, or 1 if there is no admId column in the data set.
“integralMethod” (string) Method for AUC and AUMC calculation and interpolation. “linTrapLinInterp” = linear trapezoidal linear, “linLogTrapLinLogInterp” = linear log trapezoidal, “upDownTrapUpDownInterp” = linear up log down, “linTrapLinLogInterp” = linear trapezoidal linear/log.
“partialAucTime” (list) The first element of the list is a boolean describing if this setting is used. The second element is the value of the bounds of the partial AUC calculation interval. By default, the boolean equals FALSE and the bounds are c(-Inf, +Inf).
“blqMethodBeforeTmax” (string) Method by which the BLQ data before Tmax should be replaced. Possible methods are “missing”, “LOQ”, “LOQ2” or “zero” (default).
“blqMethodAfterTmax” (string) Method by which the BLQ data after Tmax should be replaced. Possible methods are “zero”, “missing”, “LOQ” or “LOQ2” (default).
“ajdr2AcceptanceCriteria” (list) The first element of the list is a boolean describing if this setting is used. The second element is the value of the adjusted R2 acceptance criterion for the estimation of lambda_Z. By default, the boolean equals FALSE and the value is 0.98.
“extrapAucAcceptanceCriteria” (list) The first element of the list is a boolean describing if this setting is used. The second element is the value of the AUC extrapolation acceptance criterion for the estimation of lambda_Z. By default, the boolean equals FALSE and the value is 20.
“spanAcceptanceCriteria” (list) The first element of the list is a boolean describing if this setting is used. The second element is the value of the span acceptance criterion for the estimation of lambda_Z. By default, the boolean equals FALSE and the value is 3.
“lambdaRule” (string) Main rule for the lambda_Z estimation. Possible rules are “R2”, “interval”, “points” or “adjustedR2” (default).
“timeInterval” (vector) Time interval for the lambda_Z estimation when “lambdaRule” = “interval”. This is a vector of size two; default = c(-Inf, Inf).
“timeValuesPerId” (list) list(“idName” = idTimes, …), where idTimes are the observation times to use for the calculation of lambda_Z for the id idName. Default = NULL: all the time values are used.
“nbPoints” (integer) Number of points for the lambda_Z estimation when “lambdaRule” = “points”. Default = 3.
“maxNbOfPoints” (list) The first element of the list is a boolean describing if this setting is used. The second element is the maximum number of points to use for the lambda_Z estimation when “lambdaRule” = “R2” or “adjustedR2”. By default, the boolean equals FALSE and the value is Inf.
“startTimeNotBefore” (list) The first element of the list is a boolean describing if this setting is used. The second element is the minimum time value to use for the lambda_Z estimation when “lambdaRule” = “R2” or “adjustedR2”. By default, the boolean equals FALSE and the value is 0.
“weightingNca” (string) Weighting method used for the regression that estimates lambda_Z. Possible methods are “Y”, “Y2” or “uniform” (default).

Usage

setNCASettings(...)

Arguments


...
A collection of comma-separated pairs {settingName = settingValue}.

See Also

getNCASettings

Examples

## Not run:
setNCASettings(integralMethod = "linLogTrapLinLogInterp", weightingNca = "uniform") # set the settings whose names are passed as arguments
setNCASettings(administrationType = list("1" = "extravascular")) # set the administration id "1" to extravascular
setNCASettings(startTimeNotBefore = list(TRUE, 15)) # estimate the lambda_z using only points with time over 15
setNCASettings(timeValuesPerId = list('1' = c(4, 6, 8, 30), '4' = c(8, 12, 18, 24, 30))) # set the points to use for the lambda_z estimation to time = {4, 6, 8, 30} for id '1' and time = {8, 12, 18, 24, 30} for id '4'
setNCASettings(timeValuesPerId = NULL) # reset the points to use for the lambda_z estimation to the default rule

## End(Not run)
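
As a further illustration of the list-valued settings described above, the sketch below enables a partial AUC on the interval [0, 24] together with an adjusted R2 acceptance criterion of 0.98; the numeric values are arbitrary examples.

## Not run:
setNCASettings(partialAucTime = list(TRUE, c(0, 24)))      # compute a partial AUC on [0, 24]
setNCASettings(ajdr2AcceptanceCriteria = list(TRUE, 0.98)) # flag individuals whose adjusted R2 is below 0.98
## End(Not run)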



5.7.Description of the R functions associated with PKanalix preferences and project settings #

getPreferences Get a summary of the project preferences.
getProjectSettings Get a summary of the project settings.
setPreferences Set the value of one or several of the project preferences.
setProjectSettings Set the value of one or several of the settings of the project.

Get project preferences

Description

Get a summary of the project preferences. Preferences are:

“relativePath” (bool) Use relative paths for save/load operations.
“threads” (int > 0) Number of threads.
“timeStamping” (bool) Create an archive containing result files after each run.
“dpi” (bool) Apply high-density pixel correction.
“imageFormat” (string) Image format used to save PKanalix graphics.
“delimiter” (string) Character used as delimiter in exported result files.
“exportCharts” (bool) Should graphics images be exported.
“exportChartsData” (bool) Should graphics data be exported.

Usage

getPreferences(...)

Arguments


...
[optional] (string) Name of the preference whose value should be displayed. If no argument is provided, all the preferences are returned.

Value

An array which associates each preference name to its current value.

See Also

setPreferences

Examples

## Not run:
getPreferences() # retrieve a list of all the preferences
getPreferences("imageFormat", "exportCharts") # retrieve a list containing only the value of the preferences whose names are passed as arguments

## End(Not run)




Get project settings

Description

Get a summary of the project settings. Associated settings are:

“directory” (string) Path to the folder where the results will be saved. It should be a writable directory.
“dataAndModelNextToProject” (bool) Should data and model files be saved next to the project.
“grid” (int) Number of points for the continuous simulation grid.

Usage

getProjectSettings(...)

Arguments


...
[optional] (string) Name of the settings whose value should be displayed. If no argument is provided, all the settings are returned.

Value

An array which associates each setting name to its current value.

See Also

setProjectSettings

Examples

## Not run:
getProjectSettings() # retrieve a list of all the project settings
getProjectSettings("directory", "grid") # retrieve a list containing only the value of the settings whose names are passed as arguments

## End(Not run)




Set preferences

Description

Set the value of one or several of the project preferences. Preferences are:

“relativePath” (bool) Use relative paths for save/load operations.
“threads” (int > 0) Number of threads.
“timeStamping” (bool) Create an archive containing result files after each run.
“dpi” (bool) Apply high-density pixel correction.
“imageFormat” (string) Image format used to save PKanalix graphics.
“delimiter” (string) Character used as delimiter in exported result files.
“exportCharts” (bool) Should graphics images be exported.
“exportChartsData” (bool) Should graphics data be exported.

Usage

setPreferences(...)

Arguments


...
A collection of comma-separated pairs {preferenceName = preferenceValue}.

See Also

getPreferences

Examples

## Not run:
setPreferences(exportCharts = FALSE, delimiter = ",")

## End(Not run)




Set project settings

Description

Set the value of one or several of the settings of the project. Associated settings are:

“directory” (string) Path to the folder where the results will be saved. It should be a writable directory.
“dataAndModelNextToProject” (bool) Should data and model files be saved next to the project.
“grid” (int) Number of points for the continuous simulation grid.

Usage

setProjectSettings(...)

Arguments


...
A collection of comma-separated pairs {settingName = settingValue}.

See Also

getProjectSettings

Examples

## Not run:
setProjectSettings(directory = "/path/to/export/directory")

## End(Not run)



6.FAQ #

This page summarizes the frequent questions about PKanalix.

Resolution and display

  • OpenGL technology impact on remote access: the PKanalix interface uses OpenGL technology. Unfortunately, remote access using direct rendering is not compatible with OpenGL, as the OpenGL application sends instructions directly to the local hardware, bypassing the target X server. As a consequence, PKanalix cannot be used with X11 forwarding. Instead, indirect rendering should be used, where the remote application sends instructions to the X server, which transfers them to the graphics card. This is possible with ssh, but it requires a dedicated configuration depending on the machine and the operating system. Other applications such as VNC or Remmina can also be used for indirect rendering.
  • If the graphical user interface appears with too high or too low resolution, follow these steps:
    • open PKanalix
    • load any project from the demos
    • in the menu, go to Settings > Preferences and disable the “High dpi scaling” in the Options.
    • close PKanalix
    • restart PKanalix

Regulatory

  • Are NCA and CA analyses done with PKanalix accepted by the regulatory agencies like the FDA and EMA?  Yes.
  • How to cite PKanalix? Please reference it as here
    PKanalix version 2020R1. Antony, France: Lixoft SAS, 2020.
    http://lixoft.com/products/PKanalix/

Running PKanalix

  • On what operating systems does PKanalix run? PKanalix runs on Windows, Linux and macOS platforms.
  • Is it possible to run PKanalix in command line? It is possible to run PKanalix from the R command line. A full R API providing complete flexibility for running and modifying PKanalix projects is described here. A minimal sketch is shown below.
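
A minimal sketch of such a command-line session, assuming the lixoftConnectors R package shipped with the MonolixSuite is installed; the initialization arguments and the project path are hypothetical and should be adapted to your installation.

library(lixoftConnectors)                          # R package shipped with the MonolixSuite
initializeLixoftConnectors(software = "pkanalix")  # connect the R session to PKanalix
loadProject("/path/to/project/file.pkx")           # hypothetical project path
runNCAEstimation()                                 # run the NCA analysis without opening the GUI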

Input data

  • Does PKanalix support sparse data? No.
  • Does PKanalix support drug-effect or PD models? No.
  • What type of data can PKanalix handle? Extravascular, intravascular infusion, intravascular bolus for single-dose or steady-state plasma concentration and single-dose urine data can be used. See here.
  • Can I give the concentration data and dosing data as separate files? No.
  • Can I give the dosing information directly via the interface? No.
  • Can I have BLQ data? Yes, see here.
  • Can I define units? No.
  • Can I define variables such as “Sort” and “Carry”? Yes, check here.
  • Is it possible to define several dosing routes within a single data set? Yes, check the ADMINISTRATION ID column.
  • Can I use dose normalization to scale the dose by a factor? No, this must be done before using PKanalix.

Settings (options)

  • How do I indicate the type of model/data? Extravascular versus intravascular is set in the Settings window. Single-dose versus steady-state and infusion versus bolus are imputed based on the data set column-types.
  • Can I exclude individuals with insufficient data? The “Acceptance criteria” settings allow the user to define acceptance thresholds. In the output tables, each individual is flagged according to those criteria. The flags can be used to filter the results outside PKanalix (see the sketch after this list).
  • Can I set the data as being sparse? No, sparse data calculations are not supported.
  • Which options are available for the lambda_z calculation? Points to be included in the lambda_z calculation can be defined using the adjusted R2 criterion (called best fit in WinNonlin), the R2 criterion, a time range or a number of points. In addition, a specific range per individual can be defined, as well as points to include or exclude. Check the lambda_z options here and here.
  • Can I define several partial areas? No.
  • Can I save settings as a template for future projects? Not for settings. However, the user can set the data set headers that should be recognized automatically in “Settings > Preferences”.
  • Do you have a “slope selector”? Yes, check here for information on the “Check lambda_z”.
  • Can I define a therapeutic response range? No.
  • Can I set a user-defined weighting for the lambda_z? No, but the most common options are available as a setting.
  • Can I disable the calculation of lambda_z (curve stripping in WinNonlin)? Set the general rule to a time interval that does not contain any points. For a single individual, select a single point. This will lead to a failure of the lambda_z calculation, and parameters relying on lambda_z will not be calculated.
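
For example, the flags exported together with the NCA parameters can be used to filter the results in R. This is only a sketch: the file name, the delimiter and the flag column below are hypothetical and must be adapted to the actual content of your result folder.

nca <- read.table("/path/to/project/results/ncaIndividualParameters.txt",  # hypothetical result file
                  header = TRUE, sep = ",")                                # delimiter set in the preferences
accepted <- subset(nca, Flag_Rsq_adjusted == 1)  # hypothetical flag column: keep accepted individuals only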

Output results

  • Can I change the name of the parameters? No. However, the output files contain both PKanalix names and CDISC names.
  • Can I export the result table? The result tables are automatically saved as text files in the result folder. In addition, result tables can be copy-pasted to Excel or Word using the copy button at the top right of the tables.
  • Can I generate a report? No, but the result tables can be copy-pasted to Excel or Word using the copy button at the top right of the tables. Plots can be exported as images using the menu “Export > Export plots” or by clicking on the save button at the top of each plot.
  • Can I choose which parameters to show in the result table and to export to the result folder file? No. All parameters are exported. The user can filter the table outside PKanalix afterwards.
  • Are the calculation rules the same as in WinNonlin? Yes.
  • Can I define the result folder myself? By default, the result folder corresponds to the project name. However, you can define it yourself. See here to learn how to define it in the user interface.

Results

  • What result files are generated by PKanalix?
  • Can I replot the plots using another plotting software? Yes. If you go to the Export menu and click on “Export charts data”, all the data needed to reproduce the plots are stored in text files (see the sketch after this list).
  • When I open a project, my results are not loaded (message “Results have not been loaded due to an old inconsistent project”). Why? When loading a project, PKanalix checks that the project being loaded (i.e., all the information saved in the .pkx file) and the project that has been used to generate the results are the same. If not, the error message is shown and the results are not loaded, because they are inconsistent with the loaded project.
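
As an example, the exported chart data can be re-plotted with any R plotting function. The path and column names below are hypothetical; use the text files created by “Export > Export charts data”.

obs <- read.table("/path/to/project/ChartsData/observedData.txt",  # hypothetical exported chart data file
                  header = TRUE, sep = ",")
plot(obs$time, obs$concentration, log = "y",                       # hypothetical column names
     xlab = "Time", ylab = "Concentration")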

7.Case studies #

Different case studies are presented here to show how to use Monolix and the MonolixSuite for modeling and simulation.

Warfarin case study



This video case study shows a simple PK modeling workflow in Monolix 2018, with the example of warfarin. It explains the main features and algorithms of Monolix that guide the iterative process of model building: from validating the structural model to adjusting the statistical model step by step.
It includes picking a model from the libraries, choosing initial estimates with the help of population predictions, estimating parameters and uncertainty, and diagnosing the model with interactive plots and statistical tests.

 

Tobramycin case study

This case study presents the modeling of the tobramycin pharmacokinetics, and the determination of a priori dosing regimens in patients with various degrees of renal function impairment. It takes advantage of the integrated use of Datxplore for data visualization, Mlxplore for model exploration, Monolix for parameter estimation and Simulx for simulations and best dosing regimen determination.
The case study is presented in 5 sequential parts, which we recommend reading in order: Part 1: Introduction, Part 2: Data visualization with Datxplore, Part 3: Model development with Monolix, Part 4: Model exploration with Mlxplore, and Part 5: Dosing regimen simulations with Simulx.


Remifentanil case study


Remifentanil is an opioid analgesic drug with a rapid onset and rapid recovery time. It is used for sedation as well as combined with other medications for use in general anesthesia. It is given in adults via continuous IV infusion, with doses that may be adjusted to age and weight of patients.
This case-study shows how to use Monolix to build a population pharmacokinetic model for remifentanil in order to determine the influence of subject covariates on the individual parameters.
Link to Remifentanil case study

 

Longitudinal Model-Based Meta-Analysis (MBMA) with Monolix Suite

Longitudinal model-based meta-analysis (MBMA) models can be implemented using the MonolixSuite. These models use study-level aggregate data from the literature and can usually be formulated as non-linear mixed-effects models in which the inter-arm variability and residual error are weighted by the number of individuals per arm. We exemplify the model development and analysis workflow of MBMA models in Monolix using a real data set for rheumatoid arthritis, following the publication by Demin et al. (2012). In the case study, the efficacy of a drug in development (Canakinumab) is compared to the efficacy of two drugs already on the market (Adalimumab and Abatacept). Simulations with Simulx were used for decision support, to assess whether the new drug has a chance to be a better drug.
Link to MBMA case study

 

Analysis of time-to-event data

Within the MonolixSuite, the mlxtran language allows describing and modeling time-to-event data using a parametric approach. This page provides an introduction to time-to-event data, the different ways to model this kind of data, and typical parametric models. A library of common TTE models is also provided.

Two modeling and simulation workflows illustrate this approach, using two TTE data sets.


Veralipride case study


Multiple peaking in plasma concentration-time curves is not uncommon, and can create difficulties in the determination of pharmacokinetic parameters.
For example, double peaks have been observed in plasma concentrations of veralipride after oral absorption. While multiple peaking can be explained by different physiological processes, in this case site-specific absorption has been suggested to be the major mechanism. In this webinar we explore this hypothesis by setting up a population PK modeling workflow with the MonolixSuite 2018.
The step-by-step workflow includes visualizing the data set to characterize the double peaks, setting up and estimating a double absorption model, assessing the uncertainty of the parameter estimates to avoid over-parameterization, and simulations of the model.

Link to veralipride case study
