Dec
11

FDA Validation of a PCR Test: Run Control Specification Part 5/n

The purpose of the run control specification is to determine the range of values within which a sample is acceptable. For example, if a Texas Red signal is detected at cycle 38 (Ct 38), this is too late in the PCR run and the signal could be due to contamination; the sample is therefore rejected.

To get the acceptable ranges, the PCR test is run on clinical positive controls (FFPE and fresh frozen). Because Texas Red serves as an internal control, we look at its values to decide whether to accept or reject a sample.

The objective of the Run Control Specification is to find a range of Texas Red values that will deem a sample to be acceptable.

The code can be found on github.

We want to find the acceptable Texas Red range for the clinical samples. We analyze the positive controls and get prediction intervals based on the observed values.

The results for the Texas Red channel at 0.9, 0.95 and 0.99 confidence and 0.9, 0.95 and 0.99 coverage look like this:

Control          | Fluor     | Confidence | Coverage Level | 2-sided Lower Prediction Interval | 2-sided Upper Prediction Interval
Positive Control | Texas Red | 0.99       | 0.9            | 29.96 | 31.38
Positive Control | Texas Red | 0.99       | 0.95           | 29.96 | 31.38
Positive Control | Texas Red | 0.99       | 0.99           | 29.96 | 31.38
Positive Control | Texas Red | 0.95       | 0.9            | 29.96 | 31.38
Positive Control | Texas Red | 0.95       | 0.95           | 29.96 | 31.38
Positive Control | Texas Red | 0.95       | 0.99           | 29.96 | 31.38
Positive Control | Texas Red | 0.9        | 0.9            | 30.04 | 31.35
Positive Control | Texas Red | 0.9        | 0.95           | 29.96 | 31.38
Positive Control | Texas Red | 0.9        | 0.99           | 29.96 | 31.38

From these results we choose the broadest range (29.96-31.38) because we want to be liberal in accepting samples. Every confidence/coverage combination gives this range except 0.9 confidence with 0.9 coverage, which gives the narrower 30.04-31.35.

How to get the Prediction Intervals

We make a data frame of the positive controls only (not the negative controls):

positive_control = df[df$Content == 'Pos Ctrl-1' | df$Content == 'Pos Ctrl-2' | df$Content == 'Pos Ctrl-3' | df$Content == 'Pos Ctrl-4', ]
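An equivalent, more compact way to write this (same result, just using %in%):

positive_control = df[df$Content %in% c('Pos Ctrl-1', 'Pos Ctrl-2', 'Pos Ctrl-3', 'Pos Ctrl-4'), ]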

And then a data frame for each channel (Texas Red, Cy5, and FAM):
pc_TexRed = positive_control[positive_control$Fluor == 'Texas Red',]
pc_Cy5 = positive_control[positive_control$Fluor == 'Cy5', ]
pc_Fam = positive_control[positive_control$Fluor == 'FAM', ]

In the next section, I’ll show the analysis on Texas Red but this can be applied to all channels.

  1. Remove outliers using the Tukey rule (values outside the quartiles +/- 1.5 IQR). A sketch of this step is given at the end of this post.

     pc_TexRed_noOutliers <- outlierKD (pc_TexRed, Cq)

  2. Test whether the data looks normal:

     shapiro.test (pc_TexRed_noOutliers$Cq)
    The results give us:

    Results of Hypothesis Test
    --------------------------

    Alternative Hypothesis:

    Test Name: Shapiro-Wilk normality test

    Data: pc_TexRed_noOutliers$Cq

    Test Statistic: W = 0.9494407

    P-value: 0.001008959

    Since the p-value is well below 0.05, we reject normality: the data is not normal.

  3. Calculate the intervals with the tolerance package. (Strictly speaking, normtol.int and nptol.int compute tolerance intervals; we use them here as our prediction/acceptance ranges.)

    Load the library:


    library (tolerance)

    If the data looks normal, use the normtol.int function:

    normtol.int (cleaned_df$Cq, alpha=alpha_num, P=coverage_level, side=2)

    If the data does not look normal, use the nonparametric function nptol.int:

    nptol.int (cleaned_df$Cq, alpha=alpha_num, P=coverage_level, side=2)

    Since the data is not normal, we'll use nptol.int. The code below loops through multiple confidence and coverage levels (ddply comes from the plyr package; nptol_function is a small wrapper around nptol.int, sketched just below).

    confidence_levels <- c(0.99, 0.95, 0.90)
    nonparametric_coverage_levels <- c(0.90, 0.95, 0.99)
    current_nonparametric <- NA   # accumulator for the merged results

    for (confidence_level in confidence_levels) {
      alpha_num = 1 - confidence_level
      for (coverage_level in nonparametric_coverage_levels) {
        nonparametric <- ddply (all_controls_no_outliers, c('Fluor'), .fun = nptol_function, alpha = alpha_num, coverage_level = coverage_level)

        if (all (is.na (current_nonparametric))) {
          current_nonparametric <- nonparametric
        } else {
          current_nonparametric <- merge (current_nonparametric, nonparametric, all = TRUE)
        }
      } # end coverage_level
    } # end confidence_level
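
    nptol_function itself isn't shown in this post (it lives in the github script). Here is a minimal sketch of what such a wrapper could look like, assuming it simply runs nptol.int on each channel's Cq values and returns the resulting one-row data frame (ddply then prepends the Fluor column):

    nptol_function <- function (df, alpha, coverage_level) {
      # nptol.int returns a one-row data frame with the alpha, coverage (P),
      # and the 2-sided lower/upper limits for the supplied Cq values
      nptol.int (df$Cq, alpha = alpha, P = coverage_level, side = 2)
    }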

This code generates the results table at the top of this post.
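
outlierKD (used in step 1 above) is not a base R function; it comes from a helper script. As a rough stand-in, here is a minimal sketch of the Tukey-rule filter, assuming we simply drop rows whose Cq falls outside the fences Q1 - 1.5*IQR and Q3 + 1.5*IQR:

# Hypothetical stand-in for outlierKD: keep only rows whose Cq lies inside
# the Tukey fences (Q1 - 1.5*IQR, Q3 + 1.5*IQR); rows with missing Cq are dropped
remove_tukey_outliers <- function (df, column = "Cq") {
  x   <- df[[column]]
  q   <- quantile (x, probs = c(0.25, 0.75), na.rm = TRUE)
  iqr <- q[2] - q[1]
  df[!is.na (x) & x >= q[1] - 1.5 * iqr & x <= q[2] + 1.5 * iqr, ]
}

# roughly equivalent in spirit to: outlierKD (pc_TexRed, Cq)
pc_TexRed_noOutliers <- remove_tukey_outliers (pc_TexRed, "Cq")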

Dec
04

FDA Validation of a PCR Test: Limit of Detection (LoD) Part 7/n

Limit of detection is the sensitivity of the assay — how low a concentration can the test detect?

To test this, the lab did a dilution series over a range of concentrations. When the concentration is low enough, the fusion won’t be detected.

Using the data from the dilution series, we used 4 models to predict the limit of detection.

  1. Linear regression using all data
  2. Linear regression using just the concentrations where data was observed
  3. Logit
  4. Probit

1. Linear regression using all data

My data frame "chan" contains the deltaCq values and the fusion percentage (Final_Percentage_Fusion).

First do the linear regression:


chan.lm <- lm (deltaCq ~ log2 (Final_Percentage_Fusion), data = chan)
summary (chan.lm)
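
The predict call below needs a data frame of concentrations to predict at. new.dat (and the conc vector used further down for upper.lm) aren't defined in this post, so here is a minimal sketch of how they could be built, assuming we predict at the dilution-series concentrations themselves:

# Hypothetical prediction grid: the dilution-series concentrations (% fusion)
conc    <- sort (unique (chan$Final_Percentage_Fusion))
new.dat <- data.frame (Final_Percentage_Fusion = conc)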

with a prediction interval:

PI <- predict (chan.lm, newdata = new.dat, interval='prediction', level=0.90)

The fusion dilution at which the upper prediction limit crosses the delta Ct cutoff is taken as the model's estimate of the C95 (the concentration detected at least 95% of the time).

My delta Ct cutoff is 5 (determined from the Accuracy study).

cutoff <- 5

Our linear model on the upper prediction interval is:


upper.lm <- lm (PI[,"upr"] ~ log2(conc) )

The fitted line is y = slope * x + intercept, hence x = (y - intercept)/slope. In our case x is log2(concentration), which is what we want, and y is delta Ct: we want to know what concentration is predicted to have a delta Ct of 5 (our cutoff).

slope <- upper.lm$coefficients[2]
intercept <- upper.lm$coefficients[1]


C95 = (cutoff - intercept)/slope


conc_LoD = 2^C95

Since C95 is on the log2 scale, conc_LoD = 2^C95 is our limit of detection on the original concentration scale.

2. Linear regression using just the concentrations where data was observed

To run the regression on only part of the data, first identify the concentrations to remove (those with partial or no signal):

remove_because_partial_or_no_data <- c(0.0488, 0.0977, 0.1953, 0.3906, 0.781, 1.563)

Remove these concentrations from the data frame:

chan <- chan[!(chan$Final_Percentage_Fusion %in% remove_because_partial_or_no_data),]

and then follow the same steps as in #1.

3. Logit model

First, make a column of whether the PCR product is called as a positive or not. This column is called "CalledPositive".


chan["CalledPositive"] <- 1

If no signal is detected or deltaCq > 5, then the fusion is not called and CalledPositive is set to false ("0").

chan[is.na(chan$deltaCq), "CalledPositive"] <- 0


chan[!is.na(chan$deltaCq) & chan$deltaCq > 5, "CalledPositive"] <- 0
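
Equivalently, the three assignments above can be collapsed into a single vectorized line:

# 1 when a signal was detected and deltaCq <= 5, otherwise 0
chan$CalledPositive <- as.integer (!is.na (chan$deltaCq) & chan$deltaCq <= 5)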

Run the logit analysis:

mylogit <- glm (CalledPositive ~ log2Conc , family = binomial(link = "logit"), data = chan)

To find the concentration with 95% sensitivity under the logit model, I manually feed in ranges of values until the predicted sensitivity is close to 95%.

For example, by inspecting the graph I know that my LoD is somewhere between 0 and 2 (on the log2 concentration scale), so I set


xlower <- 0


xupper <- 2

and then I make a series of points:

temp.data <- data.frame(log2Conc = seq(from = xlower, to = xupper, length.out = 10))

And I input temp.data to get predicted sensitivities:


predicted.data.logit <- predict(mylogit, temp.data, type = "response", se.fit=TRUE)

I then look at predicted.data.logit to see which log2 concentration gives a predicted sensitivity close to 95%. This can take several iterations of narrowing xlower and xupper until I'm very close to 0.95.
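
As a shortcut (my addition, not part of the original workflow), the fitted logit can also be inverted directly to get the log2 concentration with 95% predicted sensitivity, instead of scanning a grid by hand:

# Invert the fitted logit: P = plogis(b0 + b1 * log2Conc) = 0.95
# => log2Conc = (qlogis(0.95) - b0) / b1   (assumes a positive slope b1)
b <- coef (mylogit)
log2_C95 <- (qlogis (0.95) - b[1]) / b[2]
C95_logit <- 2^log2_C95   # back to concentration units

The same inversion works for the probit model in section 4 by replacing qlogis(0.95) with qnorm(0.95).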

4. Probit model

We make use of the "CalledPositive" column defined in section 3 (the logit model).

myprobit <- glm(CalledPositive ~ log2Conc , family = binomial(link = "probit"), data = chan)

To find the concentration with 95% sensitivity, follow the same steps as described in section 3 (the logit model), but use predicted.data.probit instead.


predicted.data.probit <- predict(myprobit, temp.data, type = "response", se.fit=TRUE)

You can find my complete linear regression R scripts and probit and logit scripts on github.

Nov
27

FDA Validation of a PCR test: Analytical Specificity Part 3/n

Analytical specificity shows how robust the test is to contamination. Positive controls with the fusion were spiked with EDTA or ethanol at various concentrations. We determined at which concentration the PCR no longer worked.

The figures and results in this blog post were generated by the R code in "AnalyticalSpecificity_R_scripts.txt" on github.

Pictures always help, so these are some results:


EDTA affects FAM at higher concentrations.


EDTA affects Texas Red at higher concentrations.


EDTA doesn’t seem to affect ΔCt at higher concentrations.

From the pictures, it's obvious EDTA affects FAM and Texas Red at higher concentrations. But at what concentration do the signals start to differ significantly? (We'll look at the p-values later in this post.)

Experimental Setup

In our PCR data, the Content column contains the groups "Unkn-01", "Unkn-02", ..., "Unkn-10", where "Unkn-01" indicates the first concentration and "Unkn-10" indicates the 10th concentration tested.

For ethanol, the 10 concentrations that were tested were 4%, 2%, 1%, 0.5%, 0.25%, 0.125%, 0.063%, 0.031%, 0.016%, 0%.

For EDTA, the 10 concentrations that were tested were 20 mM, 10 mM, 5 mM, 2.5 mM, 1.25 mM, 0.625 mM, 0.313 mM, 0.156 mM, 0.078 mM and 0 mM.

Preliminary steps: the data is loaded into R and then cleaned and relabeled (see the script on github).

We summarize the statistics for each channel under the different conditions:

summary_all_data <- ddply (df, c('Fluor', 'Spikein_Level', 'Sample_no_number'), .fun=summary_function)
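
summary_function isn't shown in this post (it's defined in the github script). Here is a minimal sketch of what it could look like, assuming it reports the number of observations, the number of missing Cq values, and the mean, SD, and CV of Cq per group:

# Hypothetical version of summary_function: per-group summary of Cq
summary_function <- function (df) {
  x <- df$Cq
  data.frame (`Number of observations` = length (x),
              `Number of missing`      = sum (is.na (x)),
              Mean = mean (x, na.rm = TRUE),
              SD   = sd (x, na.rm = TRUE),
              CV   = sd (x, na.rm = TRUE) / mean (x, na.rm = TRUE),
              check.names = FALSE)
}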

The results look like:

Fluor     | Spikein_Level | Sample_no_number       | Number of observations | Number of missing | Mean | SD  | CV
FAM       | 0             | Positive Control -EDTA | 8 | 0 | 29.7 | 0.3 | 0
FAM       | 0.078         | Positive Control -EDTA | 8 | 0 | 29.5 | 0.2 | 0
FAM       | 0.156         | Positive Control -EDTA | 8 | 0 | 29.6 | 0.3 | 0
FAM       | 0.313         | Positive Control -EDTA | 8 | 0 | 29.7 | 0.3 | 0
FAM       | 0.625         | Positive Control -EDTA | 8 | 0 | 29.8 | 0.2 | 0
FAM       | 1.25          | Positive Control -EDTA | 8 | 0 | 29.9 | 0.3 | 0
FAM       | 2.5           | Positive Control -EDTA | 8 | 0 | 30   | 0.2 | 0
FAM       | 5             | Positive Control -EDTA | 8 | 0 | 30.1 | 0.2 | 0
FAM       | 10            | Positive Control -EDTA | 8 | 0 | 30.3 | 0.1 | 0
FAM       | 20            | Positive Control -EDTA | 8 | 0 | 32.5 | 0.9 | 0
HEX       | 0             | Positive Control -EDTA | 8 | 0 | 32.4 | 0.5 | 0
HEX       | 0.078         | Positive Control -EDTA | 8 | 0 | 32.1 | 0.5 | 0
HEX       | 0.156         | Positive Control -EDTA | 8 | 0 | 32.2 | 0.5 | 0
HEX       | 0.313         | Positive Control -EDTA | 8 | 0 | 32.2 | 0.4 | 0
HEX       | 0.625         | Positive Control -EDTA | 8 | 0 | 32.4 | 0.5 | 0
HEX       | 1.25          | Positive Control -EDTA | 8 | 0 | 32.4 | 0.3 | 0
HEX       | 2.5           | Positive Control -EDTA | 8 | 0 | 32.7 | 0.5 | 0
HEX       | 5             | Positive Control -EDTA | 8 | 0 | 32.3 | 0.5 | 0
HEX       | 10            | Positive Control -EDTA | 8 | 0 | 33.1 | 0.5 | 0
HEX       | 20            | Positive Control -EDTA | 8 | 0 | 34.6 | 2.1 | 0.1
Texas Red | 0             | Positive Control -EDTA | 8 | 0 | 30.5 | 0.2 | 0
Texas Red | 0.078         | Positive Control -EDTA | 8 | 0 | 30.7 | 0.3 | 0
Texas Red | 0.156         | Positive Control -EDTA | 8 | 0 | 30.6 | 0.2 | 0
Texas Red | 0.313         | Positive Control -EDTA | 8 | 0 | 30.8 | 0.3 | 0
Texas Red | 0.625         | Positive Control -EDTA | 8 | 0 | 30.8 | 0.3 | 0
Texas Red | 1.25          | Positive Control -EDTA | 8 | 0 | 30.7 | 0.3 | 0
Texas Red | 2.5           | Positive Control -EDTA | 8 | 0 | 30.7 | 0.3 | 0
Texas Red | 5             | Positive Control -EDTA | 8 | 0 | 30.9 | 0.3 | 0
Texas Red | 10            | Positive Control -EDTA | 8 | 0 | 31.1 | 0.3 | 0
Texas Red | 20            | Positive Control -EDTA | 8 | 0 | 33.6 | 1.1 | 0

We compare each spike-in concentration with the 0-concentration control (Content "Unkn-10"):

df_control = cleaned_df[cleaned_df$Content == "Unkn-10", ]   # this is the control (0 concentration)
if (nrow (df_control) > 0) {
  wilcox_results <- ddply (cleaned_df, c('Fluor', 'Spikein_Level', 'Sample_no_number'), .fun = mann_whitney_function, controls = df_control)

  # chemical = the spike-in being analyzed (EDTA or ethanol); paste0 avoids spaces in the path
  mann_whitney_results_filename = paste0 (folder, "Mann-Whitney_", chemical, ".csv")
  write.csv (wilcox_results, mann_whitney_results_filename, row.names = FALSE)
}
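
mann_whitney_function is defined in the github script. Here is a minimal sketch of what it could look like, assuming it compares each group's Cq values against the control wells for the same fluorophore with a two-sample Wilcoxon (Mann-Whitney) test:

# Hypothetical version of mann_whitney_function: compare a group's Cq values
# to the control Cq values for the same fluorophore
mann_whitney_function <- function (df, controls) {
  control_cq <- controls[controls$Fluor == df$Fluor[1], "Cq"]
  test <- wilcox.test (df$Cq, control_cq, exact = FALSE)
  data.frame (W = test$statistic, `p-value` = test$p.value, check.names = FALSE)
}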

The Mann Whitney results look something like:

Fluor     | Spikein_Level | Sample_no_number | W  | p-value
FAM       | 0             | PC-EDTA          | 32 | 1
FAM       | 0.078         | PC-EDTA          | 22 | 0.328205128
FAM       | 0.156         | PC-EDTA          | 32 | 1
FAM       | 0.313         | PC-EDTA          | 32 | 1
FAM       | 0.625         | PC-EDTA          | 42 | 0.328205128
FAM       | 1.25          | PC-EDTA          | 45 | 0.194871795
FAM       | 2.5           | PC-EDTA          | 54 | 0.020668221
FAM       | 5             | PC-EDTA          | 56 | 0.01041181
FAM       | 10            | PC-EDTA          | 64 | 0.0001554
FAM       | 20            | PC-EDTA          | 64 | 0.0001554
HEX       | 0             | PC-EDTA          | 32 | 1
HEX       | 0.078         | PC-EDTA          | 22 | 0.328205128
HEX       | 0.156         | PC-EDTA          | 27 | 0.645376845
HEX       | 0.313         | PC-EDTA          | 29 | 0.798445998
HEX       | 0.625         | PC-EDTA          | 36 | 0.720901321
HEX       | 1.25          | PC-EDTA          | 39 | 0.505361305
HEX       | 2.5           | PC-EDTA          | 47 | 0.13038073
HEX       | 5             | PC-EDTA          | 26 | 0.573737374
HEX       | 10            | PC-EDTA          | 55 | 0.014763015
HEX       | 20            | PC-EDTA          | 61 | 0.001087801
Texas Red | 0             | PC-EDTA          | 32 | 1
Texas Red | 0.078         | PC-EDTA          | 46 | 0.160528361
Texas Red | 0.156         | PC-EDTA          | 40 | 0.441802642
Texas Red | 0.313         | PC-EDTA          | 47 | 0.13038073
Texas Red | 0.625         | PC-EDTA          | 50 | 0.064957265
Texas Red | 1.25          | PC-EDTA          | 48 | 0.104895105
Texas Red | 2.5           | PC-EDTA          | 45 | 0.194871795
Texas Red | 5             | PC-EDTA          | 57 | 0.006993007
Texas Red | 10            | PC-EDTA          | 61 | 0.001087801
Texas Red | 20            | PC-EDTA          | 64 | 0.0001554

For example, the FAM signals are significantly different from control starting at 2.5 mM EDTA (p=0.02).

We also look for consistent trends: for FAM, every concentration at and above 2.5 mM is significantly different from control, which is what we expect.

Nov
27

FDA Validation of a PCR test: Pre-processing data Part 2 / n

All of the data analyzed used the same format, to make it easy to re-use code.

The data for each PCR experiment was in an Excel file or a comma-delimited (.csv) file with the following format.

Well | Fluor     | Target           | Content | Sample | Biological Set Name | Cq          | Cq Mean    | Cq Std. Dev
A01  | FAM       | Fusion           | Unkn-1  | Fusion |                     | 28.186823   | 28.2589569 | 0.10657898
A01  | HEX       | Internal Control | Unkn-1  | Fusion |                     | 30.82512469 | 30.8171773 | 0.193826672
A01  | Texas Red | Gene A           | Unkn-1  | Fusion |                     | NaN         | 0          | 0

To complete the FDA sections above, data from the different runs need to be combined. So I manually added a column with a unique run name to every run, which lets me combine the runs and still distinguish between wells.

Run             | Well | Fluor     | Target           | Content | Sample | Biological Set Name | Cq          | Cq Mean    | Cq Std. Dev
VP-xxxx-yyy_001 | A01  | FAM       | Fusion           | Unkn-1  | Fusion |                     | 28.186823   | 28.2589569 | 0.10657898
VP-xxxx-yyy_001 | A01  | HEX       | Internal Control | Unkn-1  | Fusion |                     | 30.82512469 | 30.8171773 | 0.193826672
VP-xxxx-yyy_001 | A01  | Texas Red | Gene A           | Unkn-1  | Fusion |                     | NaN         | 0          | 0

Sometimes we have to exclude a few wells due to operator or technical errors. In a separate file called "wells_to_exclude.csv", I listed the Run and Well IDs.

Run             | Well | Comments
VP-xxxx-yyy_001 | A08  | Operator error
VP-xxxx-yyy_002 | A12  | Forgot to add template

I can remove these wells with the following code.

For all of my analyses, the first 10 lines of my R code are the same: the data from the PCR runs are read into a data frame and then the bad wells are excluded.

Here’s the code:


# libraries that I use a lot
library (tolerance)
library (plyr)
library (EnvStats)

# folder to print all my output
folder = "C:\\Users\\pauline\\Documents\\Fusion\\AnalyticalSpecificity\\"

# folder containing all of the runs combined for this particular experiment.
# See format in Table 2 above.
filename = "C:\\Users\\pauline\\Documents\\Fusion\\EDTA_EtOH_combined.csv"

# read in data
whole_df <- read.csv (filename, header=TRUE)

# make a unique id combining the Run ID and Well ID
whole_df[,"UniqueID"] = paste (whole_df[,"Run"], whole_df[,"Well"])

# remove more problematic wells
df_problem <- read.csv ("C:\\Users\\pauline\\Documents\\Fusion\\wells_to_exclude.csv", header=TRUE)
problem_samples <- paste (df_problem[,"Run"], df_problem[,"Well"])
whole_df <- whole_df[!(whole_df$UniqueID %in% problem_samples), ]

whole_df is a data frame containing all the data, excluding the problematic wells.

To call a fusion, we use ΔCt. For the same well, we have to calculate ΔCt = FAM Ct – Texas Red Ct.

In all my scripts, I'll create a new data frame called "channels", where ΔCt is calculated by merging FAM and Texas Red into the same row (based on run and well) and then doing the subtraction.


# get Texas Red values & other important stuff
df_TexasRed <- df[df$Fluor == "Texas Red", c("UniqueID", "Fluor", "Target", "Sample", "Spikein_Level", "Sample_no_number", "Content", "Cq")]


# get FAM values only
df_FAM <- df[df$Fluor == "FAM", c("UniqueID", "Fluor", "Cq")]


# merge into same row, based on UniqueID (run & well)
channels <- merge (df_TexasRed, df_FAM, by="UniqueID")


# calculate deltaCq = FAM Ct - Texas Red Ct
# (after the merge, Cq.x comes from df_TexasRed and Cq.y from df_FAM)
channels[,"deltaCq"] <- channels$Cq.y - channels$Cq.x

If I'm lazy and want to reuse the same functions as for the FAM and Texas Red channels (which use the "Ct" column), I can also create a "Ct" column in channels and assign it the ΔCt values.
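
For example (assuming those helper functions expect a column literally named "Ct"):

channels$Ct <- channels$deltaCq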

Nov
27

FDA Validation of Companion Diagnostic (PCR test) – Part 1 / n

I’ve been helping get a companion diagnostic approved by the FDA. This blog series will describe the statistics (in R) used for the FDA validation of the companion diagnostic.

The companion diagnostic is a PCR test that checks for a fusion/rearrangement in the patient’s DNA. Normally, people have gene A and gene B. If the patient’s DNA shows a fusion between gene A and gene B, then she should be administered a drug that is tailored to the A+B fusion.

The lab did the experiments and I performed the statistical analysis in R. This series of posts will describe the analysis.

Note: This isn’t production-level code, but it’ll do the job.

Experimental Setup

The PCR test measures FAM and Texas Red values. There are 42 PCR cycles, so if a fluorescent signal is observed, its Ct value must be below 42.

FAM is the signal for the rearrangement — if it’s detected, the Ct value (the PCR cycle in which it’s detected) will be less than 42. If the fusion is not present, the signal is never detected and the FAM Ct value is NaN.

The Texas Red value is the signal for wild-type (normal) gene A. It is the internal control — we expect some normal DNA in every test sample, because the fusion is heterozygous and a cancer sample may not be pure, so Texas Red confirms that DNA is present in the sample. We will figure out the reportable range for Texas Red in the Reportable Range section.

The Cy5 channel measures the presence of wild-type (normal) gene B; it is another internal control.

The method we use to decide whether the fusion is present is delta Ct = FAM Ct - Texas Red Ct.

(You could also use the FAM signal only, if you didn’t want an internal control of wild-type DNA; it depends on how your validation plan was written.)

FDA validation is a long process that takes a few hundred experiments, covering:
Part 2. Pre-processing
Part 3. Analytical Specificity
Part 4. Accuracy
Part 5. Run Control Specification
Part 6. Reportable Range
Part 7. Limit of Detection (LoD)
Part 8. Precision (repeatability & reproducibility)
Part 9. Checking Controls

Sep
01

Creating Your Own Personal Silent Retreat

I was seeking silence and solitude so I spent a few days at Bali Silent Retreat. It was an ascetic vegetarian lifestyle (which was a bit much for me). While I had my moments of revelation, I realized I didn’t have to fly to Indonesia for silence. This blog post will describe how to create your own silent retreat.

The Bali Silent Retreat is not actually silent. Teachers instruct yoga and meditation, guides talk on tours, and you are asked to chant during ceremonies.

The silent part is removing the noise in your life. For most of us, this noise is our electronic devices, including the Internet and social media. We spend a lot of time on our email accounts, WhatsApp, Telegram, Facebook, Instagram, Twitter, etc. Noise can also come from family or roommates if you don’t live alone.

The first thing we did at the Silent Retreat was to lock up our electronic devices. There are no electrical outlets in the rooms and there’s no wi-fi signal anywhere.

That’s it! It’s that simple. The secret sauce to a silent retreat:

1) Lock up your electronic devices
2) Avoid excessive talking and stress
3) Spend time alone with your thoughts

If you can do this for a day or two (or even half a day), you’ve created your own silent retreat.

You might be asking: If I can’t go on the Internet or talk to people, what do I do?
The silent retreat had a lot of optional yoga and meditation classes to fill up the day.

Bali Silent Retreat’s Schedule
6-6:45 am Meditation
7-8:30 am Gentle Yoga
8:30-10 am Breakfast
12-2 pm Lunch
2-3:15 pm Yin yoga
3:30-4:30 pm Meditation
4:30 pm-6 pm Dinner

But these classes were optional. You don’t have to meditate or practice yoga — you can do whatever activities you find enjoyable and calming. Some activities could be:

  • hiking
  • cooking
  • yoga
  • sewing
  • gardening
  • massage
  • taking a bath
  • reading inspirational books

Fill up the day with activities that are familiar, calming, and personally enjoyable, so you can let your mind meander and wander.

Calm your mind first; stillness and insights will come later.

If you feel antsy, write it down for later (using pen and paper), then let it go. This is your time, for you.

You’ll also eat alone, in silence and without distraction (no devices or reading). This helps maintain the silence.

Here’s the reality.

Silence at home
I admit silence is hard to achieve at home because my instinct is to check the Internet every 10 minutes. However, I have still been able to find 10-hour stretches of silence. I do this when my roommate is travelling, so the house is quiet. I work a half-day in the morning so I can check all my emails and get everything out of the way. Because everyone at work knows I’m taking the afternoon off, they won’t expect me to reply until the following day. When my period of silence starts, I make sure nothing is hanging over my head and shut off all devices. Then I fill the rest of the day with activities that I enjoy (see the list above).

Silence when travelling
It’s much easier for me to attain ‘enlightenment’ when I travel alone. When I travel, I only use the hotel’s Internet, so I’m unplugged. Importantly, people don’t expect me to respond when I’m travelling. I’ve attained silence similar to what I had at the Bali Silent Retreat in a museum in Washington, D.C., soaking in the Calistoga baths, and wandering Central Park in New York.

I hope this helps you find your inner self.

Jun
12

Fantasy and Genetics – Do They Go Together Like Science and Fiction?

This could be a disaster. I’m launching a new website that does genetic analyses for fun. It combines make-believe and science.

There are a lot of direct-to-consumer genetics products out there, yet genetics is not very predictive at the individual level. The science behind these difficulties is abstruse, so why not have some fun while helping people understand their DNA better?

In between my various consulting jobs, I caught up on TV, movies and books. Specifically, I got hooked on Game of Thrones.

Game of Thrones has a lot of drama, but it also has genetics! It’s an important plot point – one of the heroes, Ned Stark, ends up dying because he understands genetics!

Ned Stark recognizes that the black-haired King Baratheon can not be the father of Queen Cersei’s golden-haired children, which means her children should not inherit the throne.

The seed is strong… All those bastards [from King Robert], all with hair as black as night. … No matter how far back Ned searched in the brittle yellowed pages, always he found the gold yielding before the coal.

Ned confronts Queen Cersei with his genetic discovery, which leads to his downfall and eventual execution.

OMG!

After getting over the shock of Ned’s death, I went meta and realized that genetics and fantasy don’t have to be completely separate — they can be mixed.

Our favorite characters have interesting personalities and traits, and a lot of these characteristics have been researched. Based on this research we can create a genetic sketch of what our favorite heroes might look like at the DNA level.

My app then takes your DNA (from direct-to-consumer DNA companies like 23andMe) and predicts which traits you might have (like loyalty, bravery, or strength endurance). Finally, the app shows what traits you could have in common with your favorite fictional characters.

For example, the T allele of rs6265 in the BDNF gene is associated with resilience. So if your DNA has the genotype TT for rs6265, we’d predict you are also resilient and would likely share that trait with Ned Stark.

I admit this is one of my stranger ideas, but there is a long tradition of combining science with fiction; why not add genetics to the mix? Since I don’t have a full-time job, I figured it was a perfect time to try new things and a perfect time to fail.  I realize this may not be for everybody — but if you want to see what you have in common with your favorite fictional characters, please try out my DNA Fiction website.

Apr
02

Obsessions in Alzheimer’s

This video shows a woman with Alzheimer’s who is obsessed with finding and helping a non-existent cat. The obsession happens over a two-month period, and you can tell she’s very distressed. She is always looking for a cat — wandering the streets and even breaking windows to go out and find it. Her daughter (Mulligrubs1) is very patient and often reassures her that the five cats they own are safe.

OK, I’ll throw this out there — I wonder if Mulligrubs1’s mother was using the cat as a metaphor for her mind. Why? Because it resembles what Martin Slevin, another Alzheimer’s carer, described in his own book. His mother talked to and obsessed over an imaginary girl in the radiator. Towards the end of the book, he realizes that his mother is the little girl in the radiator: trapped, alone, and scared. Once he makes this connection and talks to his mother about it, her obsession goes away.

 

In the video, Mulligrubs1’s mother describes a cat:

  • The cat is often lost and outside. Maybe she feels her mind is lost and wandering?
  • The cat is described as wet. Cats don’t like being wet, so this means the cat is unhappy and uncomfortable.
  • Mulligrubs1’s mother always wants to lock the cat in her room. By bringing the cat/her mind into her room, maybe she would feel safer, and her mind wouldn’t wander.

In their conversations, Mulligrubs1 is constantly reassuring the mother about the 5 real cats, but Mulligrubs1’s mother constantly refers to a cat that isn’t there. Maybe some Alzheimer’s patients create a metaphor for how they feel and if we can enter their reality, we can relate to how they feel. And maybe, just maybe, knowing that someone else understands can help our loved ones?

I imagine a possible conversation between Mulligrubs1 and her mother:

Mulligrubs1: Please sit down. Tell me about the cat. I would like to help you.

Mulligrubs1’s mother: <Describes the cat as small, lost, outside, and wanting to come back in.>

Mulligrubs1: That sounds scary. The cat must be very scared. <hugs her mother> Do you feel like that cat?

I wonder what her mother’s response would have been, and whether it would have helped her to know that someone understood how she felt. Would she remember this, or forget and return to her cat obsession? (For Martin, once he made the connection and told his mother he understood, his mother’s obsession went away.)

Of course, it’s easy to make this suggestion from my armchair. Mulligrubs1’s patience and love are admirable (watch the video to see what I mean); I hope I can do the same.

Disclaimer: These are just my ponderings…

Dec
27

Can’t lose weight with Belviq?

With the holidays nearly over, I have to admit I’m feeling sick from overeating at all the holiday parties and on the goodies given as gifts.

We’ll be examining the gender differences in the weight loss drug Belviq (generic name lorcaserin).

Based on online ratings, Belviq works better in males than in females (men gave 1.18 more points for effectiveness than women (p=0.000064) and 1.17 more for satisfaction (p=0.000086)).

In 43% of the reviews written by men, Belviq (lorcaserin) was said to have “reduced cravings”; in contrast, only 9% of the reviews written by women said lorcaserin “reduced cravings”. 34% of women said in their reviews that they experienced “headaches” while taking the drug, whereas only 3% of men did. Meanwhile, 87% of men said in their reviews that Belviq helped with “weight lost”, compared to 61% of women. So, from the reviews, we can conclude that Belviq (lorcaserin) works better in men than in women.

Here is a comparison of the words found in reviews from men and women:

Phrase Female* Male** p-value
“reduced cravings” 9% (4/44) 43% (13/38) 0.0006
“weight lost” 61% (27/44) 87% (26/30) 0.02
“headache” 34% (15/44) 3% (1/30) 0.002

* % of female reviewers that use the phrase
** % of male reviewers that use the phrase

So, if you’re a woman not seeing great results with Belviq — you’re not the only one! Based on the experiences reported in WebMD reviews, Belviq isn’t as effective for women.

Dec
24

Treating depression: Wellbutrin is more effective for females than males.

Millions suffer from depression, so it’s important to know which treatments work.

Women are underrepresented in clinical trials, so differences in how they respond to drugs can go undetected. To study this, we downloaded online drug reviews to see whether women respond to drugs differently than men.

Surprisingly, women rated Wellbutrin XL higher than men did. Wellbutrin is typically prescribed for depression. Women gave a +1.16 higher score for effectiveness (p=0.004) and +0.83 higher for satisfaction (p=0.045) compared to men.

In the reviews, over half of the women (54%) described Wellbutrin XL as “effective.” In contrast, only about 1 in 5 men (19%) described Wellbutrin as “effective.” Meanwhile, 44% of men said in their reviews that Wellbutrin was “not effective.” So we can conclude from the reviews that Wellbutrin works better in women than in men.

Here is a detailed comparison of the word frequencies “effective” and “not effective” in men and women:

Phrase Female* Male** p-value
“effective” 54% (13/24) 19% (3/16) 0.025
“not effective” 17% (4/24) 44% (7/16) 0.06

* % of female reviewers that use the phrase
** % of male reviewers that use the phrase

Thus, based on the reviews, Wellbutrin appears to be more effective in women than in men.

— Herty Liany and Pauline Ng
