A 10-Step "Best Practices" Guide for Competitive Intelligence Gathering at Medical Conferences

by Murt Abuwala | Apr 10, 2019

Pharmaceutical Competitive Intelligence

Introduction


Medical conferences are an important component of pharmaceutical competitive intelligence efforts. Major conferences attract more than 20,000 attendees each year, with 3,000 or more scientific abstracts and sessions presented over 3 to 4 days. These events provide an exceptional platform for companies to present and learn about new clinical data, interact with leading scientists and clinicians, explore emerging science, and much more. A well-organized company can assess hundreds of scientific studies, conduct hundreds of meetings with physicians and scientists, and gain valuable insights into clinical studies and practice during this short period. In fact, it can take weeks to process all of the data and insights gathered during these meetings.

For a company, there are several major activities that take place during conferences, each of which provides unique opportunities to gather intelligence and formulate insights.

  • Attending Talks and Poster Sessions to learn about new research

  • Staffing Commercial and Medical Booths, where team members have an opportunity to interact with attendees to discuss products and the pipeline.

  • Interacting with Key Stakeholders such as physicians and scientists to solicit their insights on clinical programs and practices.

The focus of this post is on the scientific talks, sessions, and posters (i.e., data coverage). We will cover HCP interactions and booth coverage in other posts.

Preparing to cover data at large conferences is daunting. Several thousand sessions and abstracts need to be reviewed, prioritized, assigned, and eventually covered and summarized. If you don’t have a good plan going in, it’s easy to get lost in the hustle and bustle and miss things. It’s happened to me more times than I’d like to admit. But over the past few years, our team has learned a lot and has gotten quite good at identifying and covering content to support competitive intelligence (CI) efforts for the world’s largest companies at the largest conferences. During this period, we also noticed wide discrepancies in the CI processes teams use, which in turn result in missed opportunities for scientific insights and KOL interactions. This post is the first in an upcoming series summarizing the practices our team implements to generate consistent results in identifying and organizing scientific content and supporting world-class CI for our customers. If you have something to add or refine, or just have a question, please leave a comment – we would love the feedback.




Central to implementing CI processes at medical conferences is scope definition: a set of scientific keywords that reflects the strategic and CI priorities of the company and team. This includes competitive products, pipeline assets that could become competitors in the future, the companies developing those assets, KOLs speaking and/or presenting posters, and emerging science presentations and the sentiment around them. We have found that a good way to organize the thinking and terms is to place them into the following 15 categories.


  • Disease Area / Indications

  • Signaling Pathways

  • Enzyme or Receptor Targets / Mechanism of Action (MOA) / Mechanism of Disease (MOD)

  • Marketed Assets / Therapies (includes therapeutic combinations)

  • Pipeline Competitors / Therapies

  • Manufacturers (of Marketed and Pipeline assets)

  • Treatment Modalities

  • Clinical Trial Programs / NCTs

  • Patient Populations / Special Patient Populations

  • Efficacy Measures

  • Adverse Events

  • Comorbidities

  • HEOR Measures

  • Other / Misc Terms

  • KOLs/Authors

Note that it is the ENTIRE combination of these keywords that gets applied (like a very complicated Venn diagram). So, within any given category you can and will have general terms that may not apply specifically to your disease area of interest. But when all of the search terms are applied together, the results are filtered down. In a future post, we’ll go into more depth on setting up search terms.
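As a rough illustration of how the combined categories filter content, here is a minimal sketch in Python. The category names, terms, and abstracts are invented examples, not a real scope definition:

```python
# Sketch: treat each keyword category as one circle of the Venn diagram.
# An abstract survives the filter only if it matches EVERY category.
abstracts = [
    {"id": "A1", "text": "Phase 2 trial of drug-x in rheumatoid arthritis targeting JAK1"},
    {"id": "A2", "text": "Survey of biologic use in psoriasis"},
]

# Illustrative categories; real scope definitions span the 15 categories above.
categories = {
    "disease": ["rheumatoid arthritis"],
    "target": ["jak1"],
}

def matches_all_categories(text, categories):
    """Match when at least one term from every category appears in the text."""
    text = text.lower()
    return all(any(term in text for term in terms) for terms in categories.values())

hits = [a["id"] for a in abstracts if matches_all_categories(a["text"], categories)]
print(hits)  # ['A1']
```

In practice the per-category term lists are much longer, but the intersection logic is the same: general terms inside one category are harmless because the other categories narrow the results.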




Often there is too much matching content to reasonably cover, and we need a way to prioritize the matching results. Scoring content is a good way to provide initial filters and narrow down the content — it’s certainly a lot faster than reading through hundreds of abstracts! We like to use a weighted index normalized on a 100-point scale for the categories, which can vary by team and conference. This is largely because the importance of the categories usually varies based on the stage of an asset and the dynamics of the environment. For instance, a Phase 2 asset may place higher weight on disease area and signaling pathways, while a mature marketed asset may weight AEs, HEOR, and special populations more heavily. Allowing for this differential is helpful in pinpointing the content that is likely to be most relevant. Additionally, within each category we use a 100-point normalized scale for the keywords themselves, which allows further fine-tuning of which keywords are most important in identifying relevant content. When coupled with a tool that dynamically applies the weighted-index scoring table to conference content, this technique can be used to iterate over results until there is good alignment between the scoring and the relevancy of the content. Our team takes this technique one step further by coupling it with our machine learning techniques, which enables us to automatically improve the scoring over time so that our systems get better and better at identifying the most relevant content.
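A minimal sketch of this weighted-index scoring, assuming category weights that sum to 100 and per-keyword weights normalized the same way (all weights and terms below are invented examples):

```python
# Sketch of a weighted-index score on a 100-point scale.
# Category weights sum to 100; keyword weights within a category sum to 100.
category_weights = {"disease": 40, "safety": 35, "heor": 25}

keyword_weights = {
    "disease": {"rheumatoid arthritis": 70, "psoriatic arthritis": 30},
    "safety": {"serious adverse event": 60, "infection": 40},
    "heor": {"quality of life": 100},
}

def score_abstract(text, category_weights, keyword_weights):
    """Total = sum over categories of (category weight * matched keyword weight / 100)."""
    text = text.lower()
    total = 0.0
    for cat, cw in category_weights.items():
        matched = sum(w for kw, w in keyword_weights[cat].items() if kw in text)
        total += cw * min(matched, 100) / 100  # cap each category's contribution
    return total

s = score_abstract(
    "Serious adverse event rates in rheumatoid arthritis",
    category_weights, keyword_weights,
)
print(s)  # 40*0.7 + 35*0.6 = 49.0
```

Shifting weight between categories (e.g., toward safety and HEOR for a mature asset) changes the ranking without touching the keyword lists, which is what makes iterating on the scoring fast.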




In addition to setting up the keywords, there may be logical combinations (AND, OR, NOT, etc.) that are helpful — for instance, if you are looking for Product X AND Product Y in the same abstract. While we find this to be helpful, it can require a lot of time and concentration, so we generally only implement it for customers with larger portfolios that have several teams attending the same conference and/or are working in areas where there is a lot of scientific activity.
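These logical combinations can be sketched as small composable predicates. "Product X" and "Product Y" are stand-ins for real asset names:

```python
# Sketch: composable AND / OR / NOT search predicates over abstract text.
def term(t):
    return lambda text: t.lower() in text.lower()

def AND(*preds):
    return lambda text: all(p(text) for p in preds)

def OR(*preds):
    return lambda text: any(p(text) for p in preds)

def NOT(pred):
    return lambda text: not pred(text)

# e.g., "Product X AND Product Y in the same abstract"
query = AND(term("Product X"), term("Product Y"))

print(query("Head-to-head: Product X vs Product Y"))  # True
print(query("Long-term safety of Product X"))         # False
```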




The real “fun” begins when you need to apply these search terms to conference content. The most basic approach is to simply copy and paste the terms you have identified into the search box of a conference website, and then copy and paste the results into a spreadsheet. It’s incredibly tedious, but it’s the way most teams do it, and it gets the job done. There are two notable challenges with this approach. First, the searches need to be repeated several times, because conferences typically update content several times in the weeks leading up to the start of the event. Second, scoring the content in the spreadsheet is challenging unless you have code to help. If you find this process tedious or onerous, there is help. For instance, there are companies (like Orbytel Group) that will provide the conference content in a nicely formatted spreadsheet. This alleviates the labor-intensive searching, copying, and pasting into MS Excel and allows teams to more quickly and easily review and identify relevant content. There are also companies (like Orbytel Group) that take it a step further and provide the conference catalog in a real-time online platform offering not only powerful searching but additional features that can be used to prioritize and assign content, as well as to collect structured insights and generate scientific debriefs (all topics for future posts).




Prioritizing the content that will actually get covered by team members is often a collaborative process in which several individuals review abstracts and decide on priorities. This can be an unwieldy process, so we recommend a few things to make it easier to manage. If you are using MS Excel, we suggest tracking individual priority designations by using a separate priority column for each person who needs to review the content. If you are using an application like harmony Insights, the application will automatically keep track of each individual’s priority designations. We suggest using simple drop-downs (e.g., Zero, Low, Medium, and High) to collect each person’s priority designation, with each drop-down value assigned a score (e.g., 0, 1, 5, 10). When the information is compiled from the various reviewers, you can simply sum their ratings to generate a total score per abstract. Based on the score distribution, it is then simple to set a threshold/cut-off value to finalize the prioritized content.
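The compilation step can be sketched in a few lines. The drop-down values and score mapping follow the example above; the reviews themselves are invented:

```python
# Sketch: compile per-reviewer priority drop-downs into a total score per abstract.
PRIORITY_SCORE = {"Zero": 0, "Low": 1, "Medium": 5, "High": 10}

# Each abstract ID maps to the drop-down choices of three reviewers (invented data).
reviews = {
    "A1": ["High", "Medium", "High"],
    "A2": ["Low", "Zero", "Low"],
    "A3": ["Medium", "Medium", "High"],
}

totals = {aid: sum(PRIORITY_SCORE[p] for p in prios) for aid, prios in reviews.items()}

THRESHOLD = 15  # illustrative cut-off chosen from the score distribution
prioritized = sorted(aid for aid, t in totals.items() if t >= THRESHOLD)

print(totals)       # {'A1': 25, 'A2': 2, 'A3': 20}
print(prioritized)  # ['A1', 'A3']
```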




Once the content is prioritized, the next step is to assign team members to the content. The trick here is to do so in a workload-balanced manner — each person should ideally have the same amount of work each day. This seems easy enough, but it can get tricky when there are competing events. We have found that the easiest way to do this, while also accommodating inevitable changes, is to populate the conference catalog with booth coverage, HCP meetings, internal events, and anything else that team members are participating in. Once you have a full picture of the schedule, it’s easier to assign items to team members without creating scheduling conflicts. If you have more than 20 people responsible for collecting insights, it’s often easier to organize them into teams of 4 or 5 people with a team lead. We usually see teams organized topically (e.g., by disease area, MOA, etc.), though there is some variance because content is usually not evenly distributed by topic. When teams are used, content should first be distributed across teams and then distributed evenly among team members within each team. This normalizes the distribution when there are content volume differences between teams.
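One possible sketch of workload-balanced assignment is a greedy approach: give each session to the currently least-loaded member who has no conflicting commitment in that time slot. Member names, slots, and conflicts below are invented for illustration:

```python
# Sketch: greedy workload-balanced assignment that respects schedule conflicts.
import heapq

members = ["Ana", "Ben", "Chen"]
conflicts = {("Ben", "Mon-AM")}  # (member, slot) pairs blocked by booth duty, HCP meetings, etc.

sessions = [("S1", "Mon-AM"), ("S2", "Mon-AM"), ("S3", "Mon-PM"), ("S4", "Mon-PM")]

load = [(0, m) for m in members]  # (current load, member) min-heap
heapq.heapify(load)

assignments = {}
for sid, slot in sessions:
    skipped = []
    while True:
        n, m = heapq.heappop(load)  # lightest-loaded member first
        if (m, slot) not in conflicts:
            break
        skipped.append((n, m))      # unavailable this slot; set aside
    assignments[sid] = m
    heapq.heappush(load, (n + 1, m))
    for item in skipped:            # restore members skipped for this slot
        heapq.heappush(load, item)

print(assignments)  # {'S1': 'Ana', 'S2': 'Chen', 'S3': 'Ben', 'S4': 'Ana'}
```

The same idea extends to teams: run the balancing first across teams, then within each team, which matches the two-level distribution described above.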




With content prioritized and evenly assigned, the team is finally ready to attend talks, poster sessions, and more. To maximize the downstream utility of the team’s insights, we have found it important to create a highly structured format for collecting them. At the very least, we suggest creating several questions such as: Objectives, Design, Results, Conclusions, Key Takeaway, and Strategic Relevance. In addition, we suggest strict word/character limits on each question to ensure that input is distilled appropriately. The character limits will also make the process of generating scientific debriefs much easier. The other thing to note here is the ‘Strategic Relevance’ question. In this case, we suggest a simple ‘Low’, ‘Medium’, ‘High’ drop-down that reflects the team member’s view of the priority of the abstract/session. This piece of input will help later to refine the scoring and learning algorithm. Finally, the team should be encouraged (if not mandated) to provide their notes/insights on a daily basis at a conference. With so much going on, it’s easy to forget important details, and notebooks, laptops, and phones can get lost during travel, so it’s good practice to collect the insights daily. We are often asked about images/photos of posters and session slides. Based on our experience working with various teams, images can be helpful in generating insights, so when the conference allows it, it’s a good idea to take photos of the content being covered.
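A structured insight form like this can be sketched as a simple template with validation. The field names follow the post; the character limits themselves are invented examples:

```python
# Sketch: structured insight template with per-field character limits.
# Limits are illustrative; tune them to your debrief format.
FIELDS = {
    "Objectives": 300,
    "Design": 300,
    "Results": 500,
    "Conclusions": 300,
    "Key Takeaway": 200,
    "Strategic Relevance": None,  # drop-down, not free text
}
RELEVANCE_VALUES = {"Low", "Medium", "High"}

def validate_insight(insight):
    """Return a list of problems; an empty list means the insight is well-formed."""
    problems = []
    for field, limit in FIELDS.items():
        value = insight.get(field, "")
        if not value:
            problems.append(f"{field}: missing")
        elif limit is not None and len(value) > limit:
            problems.append(f"{field}: over {limit} characters")
    if insight.get("Strategic Relevance") not in RELEVANCE_VALUES:
        problems.append("Strategic Relevance: must be Low, Medium, or High")
    return problems

insight = {
    "Objectives": "Assess efficacy of ...",
    "Design": "Randomized, double-blind ...",
    "Results": "Primary endpoint met ...",
    "Conclusions": "Supports further development.",
    "Key Takeaway": "Competitive efficacy signal in this population.",
    "Strategic Relevance": "High",
}
print(validate_insight(insight))  # []
```

Enforcing the limits at entry time, rather than during editing, is what keeps the downstream debrief slides consistent.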



Now that your team is collecting and sharing insights, it’s time to pull them together, figure out what is most important, and present the highlights to the broader team. Generally, it’s best to do this daily while at a conference. This way your team is briefed as things come up — data releases, compelling science, etc. — and is equipped to answer questions when they arise. At the large conferences, where a lot of relevant data may be presented, you may not have time to present every session/abstract that was covered. In this case, the “Strategic Relevance” field mentioned earlier comes in handy (e.g., it can narrow the insights down to those rated ‘High’). To run these scientific debriefs, the insights from team members will likely need to be in MS PowerPoint format. We’ve found that the way to get the most consistent results is to have a single person (or a very small group) collect the insights from the team members and then create the slides. Alternatively, applications like harmony Insights will automatically create these PowerPoint slides, which ensures timely and consistent results. There are many ways to present scientific content, but we have found that focusing on a 1-to-3-sentence ‘Key Takeaway’ and presenting each slide in less than 5 minutes is both adequate and appreciated. We provide a number of debrief templates built into our harmony Insights application that represent “best practices” — you can see some examples here.




After distilling the key insights from a conference, the content can be shared with key internal and external stakeholders. A good practice is to create an “Executive Summary” for senior medical and commercial leadership that highlights the key scientific themes, major takeaways, and important competitive learnings that can impact scientific and clinical strategy. A second good practice is to package the key scientific themes and major takeaways into collateral that medical science and commercial field personnel can share with HCPs during field visits.




We say that the process is itself a process because there is always room for improvement. It’s valuable to collect feedback from team members on what went well and what could be improved. Simple survey tools like Google Forms or SurveyMonkey are usually adequate for this purpose, and you may only need 4 or 5 questions — here is an example of what we typically use. From an analytical perspective, it’s valuable to know basic statistics, such as the average content actually covered per person, to understand whether workload assumptions should be refined (here is an example of some of the metrics we look at at the end of a conference). And from a scientific content identification and scoring perspective, it’s valuable to understand the correlation between what was prioritized and how the content was actually rated by team members. This can be used to fine-tune the weighting and scoring techniques so that content prioritization improves over time.
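A quick way to check that prioritized-versus-rated correlation is a Pearson coefficient over the two series; the data below is invented for illustration:

```python
# Sketch: correlate the pre-conference priority score with the on-site
# "Strategic Relevance" rating to see how well prioritization predicted value.
from math import sqrt

def pearson(xs, ys):
    """Plain Pearson correlation coefficient (stdlib only)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sqrt(sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys))
    return num / den

pre_scores = [25, 20, 12, 8, 2]  # compiled priority totals per abstract (invented)
relevance = {"Low": 1, "Medium": 2, "High": 3}
onsite = [relevance[r] for r in ["High", "High", "Medium", "Low", "Low"]]

r = pearson(pre_scores, onsite)
print(round(r, 2))  # 0.95
```

A high coefficient suggests the scoring weights are working; a weak one is a signal to revisit the category and keyword weights before the next conference.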




Preparing for medical conferences takes considerable planning and there is no doubt that it is a lot of work. But, with a little organization and structure, conference planning and preparation time can be reduced, processes can be standardized, and the value extracted from medical conferences can be increased. The 10-step process outlined in this post helps companies and teams improve efficiency, save time, and maximize the value gained from conferences.




Please follow us on LinkedIn to be notified and receive updates as we add to this article series – an article for each of the 10 steps above will be posted in the coming weeks.

Please feel free to share your thoughts, reactions, and/or questions. Any other topics that you’d like us to write about?