A 10-Step "Best Practices" Guide for Competitive Intelligence Gathering at Medical Conferences

by Murt Abuwala | Apr 10, 2019

Pharmaceutical Competitive Intelligence

Introduction


Medical conferences are an important component of pharmaceutical competitive intelligence efforts. Major conferences attract more than 20,000 attendees each year, with 3,000 or more scientific abstracts and sessions presented over 3 to 4 days. These events provide an amazing platform for companies to present and learn about new clinical data, interact with leading scientists and clinicians, learn about emerging science, and much more. A well-organized company can assess hundreds of scientific studies, conduct hundreds of meetings with physicians and scientists, and gain valuable insights into clinical studies and practice during this short period of time. In fact, it can take weeks to process all of the data and insights gathered during these meetings.

For a company, there are several major activities that take place during conferences, each of which provides unique opportunities to gather intelligence and formulate insights.

  • Attending Talks and Poster Sessions to learn about new research

  • Staffing Commercial and Medical Booths, where team members have an opportunity to interact with attendees to discuss products and pipeline.

  • Interacting with Key Stakeholders such as physicians and scientists to solicit their insights on clinical programs and practices.

The focus of this post is on the scientific talks, sessions, and posters (i.e., data coverage).

Preparing to cover data at large conferences is daunting. There are several thousand sessions and abstracts that need to be reviewed, prioritized, assigned, and eventually covered and summarized. If you don't have a good plan going in, it's easy to get lost in the hustle and bustle and miss things. It's happened to me more times than I'd like to admit. But, over the past few years, our team has learned a lot and has gotten pretty good at identifying and covering content to support competitive intelligence (CI) efforts for the world's largest companies at the largest conferences. During this period, we also noticed that there are wide discrepancies in the CI processes teams use, which in turn results in missed opportunities for scientific insights and KOL interactions. This post summarizes the practices our team implements to generate consistent results in identifying and organizing scientific content and supporting world-class CI for our customers. If you have something to add or refine, or just have a question, please let us know; we would love any feedback.


Step 1: Define the Scope

Central to implementing CI processes at medical conferences is scope definition. This is essentially a set of scientific phrases that reflect the strategic and CI priorities of the company and team. It includes the competitive products, pipeline assets that could become competitors in the future, the companies developing those assets, KOLs speaking and/or presenting posters, and emerging science presentations and the sentiment around them. We have found that a good way to organize the thinking and terms is to place them into the following 15 categories.


  • Disease Area / Indications

  • Signaling Pathways

  • Enzyme or Receptor Targets / Mechanism of Action (MOA) / Mechanism of Disease (MOD)

  • Marketed Assets / Therapies (includes therapeutic combinations)

  • Pipeline Competitors / Therapies

  • Manufacturers (of Marketed and Pipeline assets)

  • Treatment Modalities

  • Clinical Trial Programs / NCTs

  • Patient Populations / Special Patient Populations

  • Efficacy Measures

  • Adverse Events

  • Comorbidities

  • HEOR Measures

  • Other / Misc Terms

  • KOLs/Authors

Note that it is the ENTIRE combination of these keywords that gets applied (like a very complicated Venn diagram). So, within any given category you can and will have general terms that may not apply specifically to your disease area of interest. But when all of the search terms are applied together, they filter the results down.
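To make the Venn-diagram idea concrete, here is a minimal sketch of combined keyword filtering. The category names, keywords, and the two-category threshold are all illustrative assumptions, not a description of any specific team's production logic:

```python
# Illustrative sketch: terms are OR'd within a category, and an abstract is
# kept only when it matches at least `min_categories` different categories --
# the "Venn diagram" effect that narrows general terms down to relevance.
# All category names and keywords below are hypothetical examples.

CATEGORIES = {
    "disease_area": ["psoriasis", "psoriatic arthritis"],
    "targets_moa":  ["IL-23", "IL-17", "TYK2"],
    "pipeline":     ["deucravacitinib"],
}

def matched_categories(abstract_text, categories):
    """Return the set of category names with at least one keyword hit."""
    text = abstract_text.lower()
    return {name for name, terms in categories.items()
            if any(term.lower() in text for term in terms)}

def keep(abstract_text, categories, min_categories=2):
    """Keep the abstract only if enough categories match simultaneously."""
    return len(matched_categories(abstract_text, categories)) >= min_categories
```

A general term like a broad target name alone would not pass the filter; it only surfaces an abstract when it co-occurs with terms from other categories.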


Step 2: Score the Content

Often there is too much matching content to reasonably cover, and we need a way to prioritize the matching results. Scoring content is a good way to provide some initial filters and narrow down the content — it’s certainly a lot faster than reading through hundreds of abstracts! We like to use a weighted index normalized on a 100-point scale for the categories, which can vary by team and conference. This is largely because the importance of the categories usually varies based on the stage of an asset and the dynamics of the environment. For instance, a Phase 2 asset may place higher weight on disease area and signaling pathways. A mature marketed asset may weight AEs, HEOR, and Special Populations more highly. So, allowing for this differential is helpful in pinpointing content that is likely to be most relevant. Additionally, within each category we use a 100-point normalized scale to assign weights to the keywords. This allows further fine-tuning of which keywords are most important in identifying relevant content. When coupled with a tool that dynamically applies the weighted index scoring table to conference content, this technique can be used to iterate over results until there is good alignment between the scoring and relevancy of the content. Our team takes this technique one step further by coupling it to our machine learning techniques. This enables us to automatically improve the scoring over time so that our systems get better and better at identifying the most relevant content.
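The weighted-index idea can be sketched in a few lines. The weights and keywords below are made-up examples, and real implementations would handle stemming, synonyms, and phrase matching:

```python
# Hypothetical weighted-index scoring: category weights sum to 100, and the
# keyword weights within each category also sum to 100. An abstract's score
# is the sum over categories of (category weight / 100) * matched keyword
# weights, so category emphasis can shift by asset stage without re-tagging.

CATEGORY_WEIGHTS = {"disease_area": 40, "adverse_events": 35, "heor": 25}

KEYWORD_WEIGHTS = {
    "disease_area":   {"atopic dermatitis": 70, "eczema": 30},
    "adverse_events": {"conjunctivitis": 60, "injection site": 40},
    "heor":           {"quality of life": 100},
}

def score_abstract(text):
    """Score one abstract against the weighted index tables above."""
    text = text.lower()
    total = 0.0
    for cat, weight in CATEGORY_WEIGHTS.items():
        hit = sum(w for kw, w in KEYWORD_WEIGHTS[cat].items() if kw in text)
        total += (weight / 100) * hit
    return round(total, 1)
```

Re-weighting for a different asset stage then means editing two small tables rather than rebuilding the search.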


Step 3: Add Logical Combinations

In addition to setting up the keywords, there may be logical combinations (AND, OR, NOT, etc.) that are helpful; for instance, you may be looking for Product X AND Product Y in the same abstract. While we find this to be helpful, it can require a lot of time and concentration. Generally, we translate the logical operators into our scoring algorithm, which makes them easier to manage.
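One hedged way to fold an AND condition into a scoring scheme, rather than maintaining a separate boolean query, is a co-occurrence bonus. The product names and bonus value here are placeholders:

```python
# Sketch: instead of a hard boolean filter, award a score bonus when two
# terms (e.g., two competing products) appear in the same abstract. The
# bonus of 25 points is an arbitrary illustrative choice.

def and_bonus(text, term_a, term_b, bonus=25):
    """Return `bonus` if both terms co-occur in the abstract, else 0."""
    text = text.lower()
    return bonus if term_a.lower() in text and term_b.lower() in text else 0
```

Added on top of the weighted-index score, this surfaces head-to-head content without excluding abstracts that mention only one product.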


Step 4: Apply the Search Terms to Conference Content

The real “fun” begins when you need to apply these search terms to conference content. The most basic approach is to simply copy and paste the terms you have identified into the search box of a conference website, and then copy and paste the results into a spreadsheet. It’s incredibly tedious, but it’s the way most teams are doing it and it gets the job done. There are, however, two notable challenges with this approach. First, the searches will need to be repeated several times because conferences typically update content several times in the weeks leading up to the start of the conference. And second, scoring the content in the spreadsheet is challenging unless you have code to help. If you find this process tedious or onerous, there is help. For instance, there are companies that will provide the conference content in a nicely formatted spreadsheet. This alleviates the labor-intensive searching, copying, and pasting into MS Excel and allows teams to more quickly and easily review and identify relevant content. There are also companies that take it a step further and provide the conference catalog in a real-time online platform that reduces the need to use spreadsheets for this process.


Step 5: Prioritize the Content

Prioritizing content is often a collaborative process in which several individuals are involved in reviewing abstracts and deciding on priorities. This can be an unwieldy process, so we recommend a few things to make it easier to manage. If you are using MS Excel, we suggest tracking individual priority designations by using separate priority columns for each person who needs to review the content. We suggest using simple drop-downs (e.g., No, Not Sure, Low, Medium, and High) to collect each person’s priority designation. When there is a lot of content and/or many people involved, this can be taken one step further by adding scoring to the priority selections. For example, each drop-down value can be assigned a score (e.g., 0, 1, 5, 10). This allows the ratings to be translated into a total score for the abstract. Then, based on the score distribution across all of the abstracts, a threshold/cut-off value can be identified to help in generating a final prioritized list.
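The drop-down-to-score translation can be sketched as follows. The exact mapping of ratings to points is an assumption (the post gives 0, 1, 5, 10 as examples without pairing them to specific labels):

```python
# Sketch: map each reviewer's drop-down rating to points, sum the points
# per abstract, and keep abstracts above a cut-off derived from the score
# distribution. The point values here are illustrative assumptions.

RATING_SCORES = {"No": 0, "Not Sure": 1, "Low": 2, "Medium": 5, "High": 10}

def total_score(ratings):
    """Sum one score per reviewer's drop-down selection."""
    return sum(RATING_SCORES[r] for r in ratings)

def prioritized(abstract_ratings, cutoff):
    """abstract_ratings: {abstract_id: [rating, ...]}; keep ids >= cutoff."""
    return [aid for aid, ratings in abstract_ratings.items()
            if total_score(ratings) >= cutoff]
```

In practice the cut-off is chosen after inspecting the score distribution, e.g., at a natural break or at the coverage capacity of the team.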

Generally, we have found the most efficient way for teams to get through content review and prioritization is by grouping abstracts and setting up a ‘buddy’ system. First, group the abstracts (e.g., topic, disease area, etc.). Then, assign 2 people to review each abstract group. The first person will review the content and recommend priorities and then hand it off to their ‘buddy’ who will also review the content. If they have a difference in opinion on the priority, they discuss and modify as necessary. Once they are done, they submit their recommendations.  There are a lot of different ways to do this and we have seen many. But, the buddy system has been, by far, the most efficient and organized way we have encountered.


Step 6: Assign Team Members to Content

Once the content is prioritized, the next step is to assign team members to the content. The trick here is to do so in a workload-balanced manner — each person should ideally have the same amount of work each day. It seems easy enough, but this can get tricky when there are competing events. We have found the easiest and best way to do this, while also accommodating inevitable changes, is to populate the conference catalog with booth coverage, HCP meetings, internal events, and anything else that team members are participating in. Once you have a full understanding of the schedule, it’s easier to assign items to team members without creating scheduling conflicts. If you have more than 20 people responsible for collecting insights, it’s often easier to organize them into teams of 4 or 5 people with a team lead. We usually see teams organized topically (e.g., MOA, development stage, etc.). However, when deep expertise is not needed, we recommend organizing teams by something unrelated to the content, like favorite sport or favorite food. In other words, each team will have a mix of content. We recommend this because it is much easier to generate an even work distribution this way and to accommodate changes in schedules. When teams are used, content should first be distributed across teams and then distributed evenly among team members within each team.
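A simple greedy pass captures the workload-balancing idea: each session goes to the least-loaded person who has no conflicting commitment in that time slot. This is a hypothetical sketch, not the tooling any particular team uses:

```python
# Greedy workload-balanced assignment sketch. Sessions are (id, time_slot)
# pairs; `busy` records each member's other commitments (booth duty, HCP
# meetings, internal events) by slot, as described above.

def assign(sessions, members, busy):
    """Assign each session to the least-loaded member free in its slot."""
    load = {m: 0 for m in members}
    assignments = {}
    for session_id, slot in sessions:
        free = [m for m in members if slot not in busy.get(m, set())]
        if not free:
            continue  # nobody available -- leave for manual resolution
        pick = min(free, key=lambda m: load[m])  # least-loaded wins
        assignments[session_id] = pick
        load[pick] += 1
    return assignments
```

Real schedules also weight session priority and walking distance between halls, but the least-loaded-and-available rule is the core of even distribution.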


Step 7: Collect Structured Insights

With content prioritized and evenly assigned — the team is finally ready to attend talks, poster sessions, and more. To maximize the downstream utility of the team’s insights, we have found that creating a highly structured format to collect them is important. At the very least, we suggest creating several fields such as: Objectives, Design, Results, Conclusions, Key Takeaway, and Strategic Relevance. In addition, we suggest strict word/character limits on each field to ensure that input is distilled appropriately. The limits will also make the process of generating scientific debriefs much easier. The other thing to note here is the ‘Strategic Relevance’ field. In this case, we suggest a simple ‘Low’, ‘Medium’, ‘High’ drop-down that reflects the team member’s view of the priority of the abstract/session. This piece of input helps refine the prioritization process. Finally, the team should be encouraged (if not mandated) to provide their notes/insights on a daily basis at a conference. With so much going on, it’s easy to forget important details.

We are often asked about images/photos of posters and session slides. Based on our experience working with various teams, images can be helpful in generating the insights. So, when the conference allows for it, it’s a good idea to take photos of the content that is being covered. However, we have found that it’s better to keep images out of final scientific debriefs and executive summaries unless there is professional support to help manage image formatting and layout.
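The structured format above can be enforced programmatically. The field names come from the post; the word limits themselves are illustrative assumptions:

```python
# Sketch of validating a structured insight entry: per-field word limits
# (limit values are hypothetical) plus a constrained Strategic Relevance
# drop-down, so every submission is already distilled for the debrief.

WORD_LIMITS = {"Objectives": 40, "Design": 60, "Results": 80,
               "Conclusions": 40, "Key Takeaway": 30}
RELEVANCE_VALUES = {"Low", "Medium", "High"}

def validate(entry):
    """Return a list of problems; an empty list means the entry is valid."""
    problems = []
    for field, limit in WORD_LIMITS.items():
        words = len(entry.get(field, "").split())
        if words > limit:
            problems.append(f"{field}: {words} words (limit {limit})")
    if entry.get("Strategic Relevance") not in RELEVANCE_VALUES:
        problems.append("Strategic Relevance must be Low/Medium/High")
    return problems
```

Running a check like this at submission time is what makes the downstream debrief assembly fast: every entry arrives in the same shape.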


Step 8: Run Daily Scientific Debriefs

Now that your team is collecting and sharing insights, it’s time to pull them together, figure out what is most important, and present it to the broader team. Generally, it’s best to do this daily while at a conference. This way your team is getting briefed as things come up and is equipped to answer questions when they arise — data releases, compelling science, etc. At the large conferences, where there may be a lot of relevant data presented, you may not have time to present all of the sessions/abstracts that were covered. In this case, the “Strategic Relevance” field that we mentioned earlier comes in handy (e.g., it can narrow down insights to those rated ‘High’). To run these scientific debriefs, the insights from the team members will likely need to be in MS PowerPoint format. We’ve found that the way to get the most consistent results is to have a single person (or a very small group of people) collect the insights from the team members and then create the slides. There are many ways to present scientific content — but we have found that focusing on a 1 to 3 sentence ‘Key Takeaway’ and presenting each slide in less than 5 minutes is both adequate and appreciated. A couple of examples of debrief slides that represent good practices are here.
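Narrowing the daily debrief by the on-site relevance rating is a one-liner. A minimal sketch, assuming insights are collected as records with an `id` and a `Strategic Relevance` field as described above:

```python
# Minimal sketch: shortlist the abstracts the covering team member flagged
# as 'High' strategic relevance for the daily debrief deck.

def debrief_shortlist(insights):
    """insights: list of dicts with 'id' and 'Strategic Relevance' keys."""
    return [i["id"] for i in insights
            if i.get("Strategic Relevance") == "High"]
```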


Step 9: Share the Insights with Stakeholders

After distilling the key insights from a conference, the content can be shared with key internal and external stakeholders. A good practice is to create an “Executive Summary” for senior medical and commercial leadership that highlights the key scientific themes, major takeaways, and important competitive learnings that can impact scientific and clinical strategy. A second good practice is to package the key scientific themes and major takeaways into collateral that can be shared with HCPs during field visits by medical science and commercial field personnel.


Step 10: Improve the Process

We say that the process itself is a process because there is always room for improvement. It’s valuable to collect feedback from team members on what went well and what could be improved. Simple survey tools like Google Forms or SurveyMonkey are usually adequate for this purpose, and you may only need 4 or 5 questions — here is an example of what we typically use. From an analytical perspective, it’s valuable to know basic statistics, such as the average content actually covered per person, to understand if workload assumptions should be refined (here is an example of some of the metrics we look at at the end of a conference). And from a scientific content identification and scoring perspective, it’s valuable to understand the correlation between what was prioritized versus how the content was actually rated by team members. This can be used to fine-tune the weighting and scoring techniques so that content prioritization improves over time.
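The prioritization-versus-rating check described above amounts to a correlation. A sketch using Pearson's r, with the mapping of relevance labels to numbers as an assumption:

```python
# Sketch of the post-conference feedback loop: correlate the pre-conference
# priority score with the on-site 'Strategic Relevance' rating (mapped to
# 1/2/3 -- an illustrative encoding). A low correlation suggests the
# keyword weights need re-tuning for the next conference.
from math import sqrt

RELEVANCE_NUM = {"Low": 1, "Medium": 2, "High": 3}

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def priority_vs_rating(priority_scores, ratings):
    """Both args are dicts keyed by abstract id; compare shared ids only."""
    ids = sorted(set(priority_scores) & set(ratings))
    xs = [priority_scores[i] for i in ids]
    ys = [RELEVANCE_NUM[ratings[i]] for i in ids]
    return pearson(xs, ys)
```

A value near 1 means the pre-conference scoring predicted on-site relevance well; a weak or negative value points at which category weights to revisit.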




Preparing for medical conferences takes considerable planning and there is no doubt that it is a lot of work. But, with a little organization and structure, conference planning and preparation time can be reduced, processes can be standardized, and the value extracted from medical conferences can be increased. The 10-step process outlined in this post helps companies and teams improve efficiency, save time, and maximize the value gained from conferences.



Please feel free to share your thoughts, reactions, and/or questions.