
Survey on the use of Artificial Intelligence methods in the evidence management phase of the risk assessment process.

Fields marked with * are mandatory.

Artificial Intelligence approaches in the evidence management phase

The European Food Safety Authority (EFSA) is the agency of the European Union that provides independent scientific advice and communicates on existing and emerging risks associated with the food chain. As part of its mandate, EFSA receives requests for scientific advice mainly from the European Commission, but also from the European Parliament or Member States.

In order to improve the delivery of this mandate, a key priority area for the organisation is to investigate the potential for the application of Artificial Intelligence (AI) approaches in the evidence management phase of the risk assessment process, where the term “evidence management” refers to activities around the collection, appraisal, assessment and publication of data gathered through data analysis, systematic literature review, and expert consultation. The goal is to deliver a roadmap for action providing recommendations for future multiannual, multi-partner studies or projects in the area of AI approaches, building on EFSA's vision, supporting EFSA's preparedness for future risk assessment requirements, and preventing possible divergence on sensitive matters.

The key objective of this survey - promoted by PwC - is to investigate AI applications for evidence management in order to provide useful support to EFSA in developing a harmonised approach on the implementation of AI methods in these areas by mapping existing and planned developments, identifying and prioritizing available tools and research activities, identifying challenges and technical/non-technical barriers, and identifying and ranking opportunities for collaboration with potential partners in this area, while promoting communication and knowledge transfer activities within the EU Member States.

Considering your experience within organisations working with evidence management, you have been selected to participate in this Survey. The survey takes 10 minutes on average to complete, and we would appreciate your response by the 12th of September 2021. Please feel free to forward the survey to those colleagues of yours who you think might be appropriate or interested. Thank you in advance for the time you are dedicating to it; your support will be extremely valuable to us!

Your personal data shall be processed in compliance with the EU General Data Protection Regulation (EU) 2016/679 ("GDPR"), applicable since 25 May 2018. For more information regarding the Survey platform, please refer to the following privacy statement: https://ec.europa.eu/eusurvey/auth/ps

I agree that the data in this survey can be used for the above-mentioned purposes


Section 1: Respondent characteristics

Please indicate your name and surname

Please indicate your email address

Please indicate the name of the organisation you represent

Please indicate your role within the organisation

What kind of organization do you belong to?

Research Organisations/Academia
National & International Authorities/Organisations
Private for-profit entity with <250 employees, i.e. SME
Private for-profit entity with >250 employees, i.e. Large Enterprise

Section 2: Use of AI

I. Does your organisation apply AI-enabled systems to any parts of the evidence management process?
“Evidence management” refers to activities around the collection, appraisal, assessment and publication of data gathered through data analysis, systematic literature review, and expert consultation.

Yes
No, but we are planning it for the future.
No, and we have no plans to apply it.


II. Please indicate any of the following areas in which you are currently either using AI, planning to use AI, or whether it is an area of interest to you. All below-mentioned terms are defined in a popup by clicking the following "?" icon.

Terminology mapping: Mapping of terminology to a standard dictionary in situations where discrepancies might occur, e.g., when data is collected from external organisations that may have different terminologies or use a different language
Data de-duplication: Automatic detection of duplicated information in structured or unstructured data
Anomaly detection: Identification of outliers in the collected datasets
Research/Review question evaluation: Automatic evaluation of a document’s structure and content against a standard framework (e.g., research question evaluation against PICO - Population, Interest, Context - or Systematic Literature Review protocol evaluation against Cochrane)
Comparable studies/assessment detection: Automatically identify past studies/assessments answering a specific or related question based on semantic analysis.
Relevant Keywords identification: Automatic selection of relevant keywords for a literature search/assessment from a collection of documents
Information clustering/categorization: Automatic clustering/categorization of a document’s content
Abstract screening: Automatically screen abstracts for relevance of content to a specific assessment topic
Information extraction from documents: Automatic extraction of relevant information from scientific papers or other written works
Information appraisal: Automatic appraisal of information extracted from scientific papers or other written works against existing protocols, frameworks and study methodologies.
Information consolidation: Automatic consolidation of information extracted from multiple scientific papers or other written works to enable collective conclusions
Quality assessment/correction of written work: Automatic assessment of the quality of scientific content/narrative of written work and provision of recommendations for improvement.
Identification of risk/confidentiality issues within a document: Automatic identification of sections which may lead to potential risk or confidentiality issues and provision of recommendations for risk reduction.
Identification of relevant stakeholders/experts: Automatic identification of stakeholders of relevance to an assessment or research topic (e.g., identification of experts) and extraction of contact details from online sources.
Expert ranking: Automatic screening and ranking of relevant stakeholders identified.
Evidence sources appraisal: Automatic appraisal of the evidence collected against a research question or assessment topic
Automatic Text Summarisation: Automatic summarisation or analysis of information extracted to support staff in the formation of an opinion
Uncertainty Identification: Automatic identification of uncertainties that may affect accuracy, ambiguity, or statistical estimates from collected and appraised evidence.
Uncertainty Prioritisation: Automatic prioritisation/classification/assessment of uncertainties to identify the need for any additional interventions.
Automatic report production: Automatic production of reports/summarization segments

Area: Using | Planning to Use | Not using but interested | Not using and not interested

Terminology mapping

Data de-duplication

Anomaly detection

Research/Review question evaluation

Comparable studies/assessment detection

Relevant Keywords identification

Information clustering/categorization

Abstract screening

Information extraction from documents

Information appraisal

Information consolidation

Quality assessment/correction of written work

Identification of risk/confidentiality issues within a document

Identification of relevant stakeholders/experts

Expert ranking

Evidence sources appraisal

Automatic Text Summarisation

Uncertainty Identification

Uncertainty Prioritisation

Automatic report production


Please indicate whether you are using an internally-developed solution or a commercially available one and which one


Thank you for taking time to complete this survey

Section 3: Insights


I. What are the main benefits experienced/envisaged by your organisation from the use of AI in the above areas?

Time efficiencies/reduced effort
Improved quality of output
Reduction of human bias
Improved transparency and trustworthiness
Increased body of evidence used
Other

Please Elaborate

II. What are the main bottlenecks experienced/envisaged by your organisation from the use of AI in the above areas?

The need to update infrastructure
Difficulty in securing internal investment
Information Security Issues
Legal Issues
Issues around data management
Resistance to change from end-users
Poor quality of output
Minimal benefit experienced
Issues around integrating into current workflows
Need for qualitative evaluation to avoid bias in output
Lack of trust in AI
Other

Please Elaborate

Section 4: Consent to follow-up

Please indicate whether you would consent to receiving a follow-up call from PwC EU Services EESV to obtain more information within this project

Yes
No

Section 2: Research on AI applications in areas of relevance to EFSA

I. Does your institution conduct research on AI applications that may be relevant to evidence management?
“Evidence management” refers to activities around the collection, appraisal, assessment and publication of data gathered through data analysis, systematic literature review, and expert consultation.


Yes
No, but we are planning it for the future.
No, and we have no plans to apply it.

II. Please indicate any of the following areas in which you conduct scientific research that might be of relevance. All below-mentioned terms are defined in a popup by clicking the following "?" icon.

Terminology mapping: Mapping of terminology to a standard dictionary in situations where discrepancies might occur, e.g., when data is collected from external organisations that may have different terminologies or use a different language
Data de-duplication: Automatic detection of duplicated information in structured or unstructured data
Anomaly detection: Identification of outliers in the collected datasets
Research/Review question evaluation: Automatic evaluation of a document’s structure and content against a standard framework (e.g., research question evaluation against PICO - Population, Interest, Context - or Systematic Literature Review protocol evaluation against Cochrane)
Comparable studies/assessment detection: Automatically identify past studies/assessments answering a specific or related question based on semantic analysis.
Relevant Keywords identification: Automatic selection of relevant keywords for a literature search/assessment from a collection of documents
Information clustering/categorization: Automatic clustering/categorization of a document’s content
Abstract screening: Automatically screen abstracts for relevance of content to a specific assessment topic
Information extraction from documents: Automatic extraction of relevant information from scientific papers or other written works
Information appraisal: Automatic appraisal of information extracted from scientific papers or other written works against existing protocols, frameworks and study methodologies.
Information consolidation: Automatic consolidation of information extracted from multiple scientific papers or other written works to enable collective conclusions
Quality assessment/correction of written work: Automatic assessment of the quality of scientific content/narrative of written work and provision of recommendations for improvement.
Identification of risk/confidentiality issues within a document: Automatic identification of sections which may lead to potential risk or confidentiality issues and provision of recommendations for risk reduction.
Identification of relevant stakeholders/experts: Automatic identification of stakeholders of relevance to an assessment or research topic (e.g., identification of experts) and extraction of contact details from online sources.
Expert ranking: Automatic screening and ranking of relevant stakeholders identified.
Evidence sources appraisal: Automatic appraisal of the evidence collected against a research question or assessment topic
Automatic Text Summarisation: Automatic summarisation or analysis of information extracted to support staff in the formation of an opinion
Uncertainty Identification: Automatic identification of uncertainties that may affect accuracy, ambiguity, or statistical estimates from collected and appraised evidence.
Uncertainty Prioritisation: Automatic prioritisation/classification/assessment of uncertainties to identify the need for any additional interventions.
Automatic report production: Automatic production of reports/summarization segments

Yes | No

Terminology mapping

Data de-duplication

Anomaly detection

Research/Review question evaluation

Comparable studies/assessment detection

Relevant Keywords identification

Information clustering/categorization

Abstract screening

Information extraction from documents

Information appraisal

Information consolidation

Quality assessment/correction of written work

Identification of risk/confidentiality issues within a document

Identification of relevant stakeholders/experts

Expert ranking

Evidence sources appraisal

Automatic Text Summarisation

Uncertainty Identification

Uncertainty Prioritisation

Automatic report production


Please indicate the stage of the development of the solution regarding Terminology Mapping, and any relevant public links.

Please indicate the stage of the development of the solution regarding Data de-duplication, and any relevant public links.

Please indicate the stage of the development of the solution regarding Anomaly Detection, and any relevant public links.

Please indicate the stage of the development of the solution regarding Research/Review question evaluation, and any relevant public links.


Please indicate the stage of the development of the solution regarding Comparable studies/assessment detection, and any relevant public links.

Please indicate the stage of the development of the solution regarding Relevant Keywords identification, and any relevant public links.

Please indicate the stage of the development of the solution regarding Information clustering/categorization, and any relevant public links.

Please indicate the stage of the development of the solution regarding Abstract screening, and any relevant public links.

Please indicate the stage of the development of the solution regarding Information extraction from documents, and any relevant public links.

Please indicate the stage of the development of the solution regarding Information appraisal, and any relevant public links.

Please indicate the stage of the development of the solution regarding Information consolidation, and any relevant public links.

Please indicate the stage of the development of the solution regarding Quality assessment/correction of written work, and any relevant public links.

Please indicate the stage of the development of the solution regarding Identification of risk/confidentiality issues within a document, and any relevant public links.


Please indicate the stage of the development of the solution regarding Identification of relevant stakeholders/experts, and any relevant public links.

Please indicate the stage of the development of the solution regarding Expert ranking, and any relevant public links.

Please indicate the stage of the development of the solution regarding Evidence sources appraisal, and any relevant public links.

Please indicate the stage of the development of the solution regarding Automatic Text Summarization, and any relevant public links.

Please indicate the stage of the development of the solution regarding Uncertainty Identification, and any relevant public links.

Please indicate the stage of the development of the solution regarding Uncertainty Prioritisation, and any relevant public links.

Please indicate the stage of the development of the solution regarding Automatic report production, and any relevant public links.

Thank you for taking time to complete this survey

Section 3: Insights

I. What are the main benefits experienced/envisaged by your organisation from the use of AI in the above areas?

Time efficiencies/reduced effort
Improved quality of output
Reduction of human bias
Improved transparency and trustworthiness


Increased body of evidence used
Other

Please Elaborate

II. What are the main bottlenecks in terms of the development of AI solutions in the evidence management phase?

Lack of data
Lack of subject-specific ontologies
Lack of generalisation
No formalised process of evaluating the performance of such systems
Data extraction from commercial publishers is restricted
Need for qualitative evaluation to avoid bias in output
Linguistic ambiguities lead to errors
Lack of funding opportunities
Lack of trust in AI
Other

Please Elaborate

Section 4: Consent to follow-up

Please indicate whether you would consent to receiving a follow-up call from PwC EU Services EESV to obtain more information within this project.

Yes
No

Section 2: Development of AI-enabled tools that may be relevant to EFSA’s work

I. Does your company develop products utilizing AI for the evidence management process?
“Evidence management” refers to activities around the collection, appraisal, assessment and publication of data gathered through data analysis, systematic literature review, and expert consultation.

Yes
No, but we are planning it for the future
No

II. Please indicate any of the following areas in which you develop or plan to develop products that might be of relevance. All below-mentioned terms are defined in a popup by clicking the following "?" icon.


Terminology mapping: Mapping of terminology to a standard dictionary in situations where discrepancies might occur, e.g., when data is collected from external organisations that may have different terminologies or use a different language
Data de-duplication: Automatic detection of duplicated information in structured or unstructured data
Anomaly detection: Identification of outliers in the collected datasets
Research/Review question evaluation: Automatic evaluation of a document’s structure and content against a standard framework (e.g., research question evaluation against PICO - Population, Interest, Context - or Systematic Literature Review protocol evaluation against Cochrane)
Comparable studies/assessment detection: Automatically identify past studies/assessments answering a specific or related question based on semantic analysis.
Relevant Keywords identification: Automatic selection of relevant keywords for a literature search/assessment from a collection of documents
Information clustering/categorization: Automatic clustering/categorization of a document’s content
Abstract screening: Automatically screen abstracts for relevance of content to a specific assessment topic
Information extraction from documents: Automatic extraction of relevant information from scientific papers or other written works
Information appraisal: Automatic appraisal of information extracted from scientific papers or other written works against existing protocols, frameworks and study methodologies.
Information consolidation: Automatic consolidation of information extracted from multiple scientific papers or other written works to enable collective conclusions
Quality assessment/correction of written work: Automatic assessment of the quality of scientific content/narrative of written work and provision of recommendations for improvement.
Identification of risk/confidentiality issues within a document: Automatic identification of sections which may lead to potential risk or confidentiality issues and provision of recommendations for risk reduction.
Identification of relevant stakeholders/experts: Automatic identification of stakeholders of relevance to an assessment or research topic (e.g., identification of experts) and extraction of contact details from online sources.
Expert ranking: Automatic screening and ranking of relevant stakeholders identified.
Evidence sources appraisal: Automatic appraisal of the evidence collected against a research question or assessment topic
Automatic Text Summarisation: Automatic summarisation or analysis of information extracted to support staff in the formation of an opinion
Uncertainty Identification: Automatic identification of uncertainties that may affect accuracy, ambiguity, or statistical estimates from collected and appraised evidence.
Uncertainty Prioritisation: Automatic prioritisation/classification/assessment of uncertainties to identify the need for any additional interventions.
Automatic report production: Automatic production of reports/summarization segments


Yes | No

Terminology mapping

Data de-duplication

Anomaly detection

Research/Review question evaluation

Comparable studies/assessment detection

Relevant Keywords identification

Information clustering/categorization

Abstract screening

Information extraction from documents

Information appraisal

Information consolidation

Quality assessment/correction of written work

Identification of risk/confidentiality issues within a document

Identification of relevant stakeholders/experts

Expert ranking

Evidence sources appraisal

Automatic Text Summarisation

Uncertainty Identification

Uncertainty Prioritisation

Automatic report production

II. Please indicate any of the following areas in which you develop, or plan to develop, products that might be of relevance. All below-mentioned terms are defined in a popup by clicking the following "?" icon.




Yes No

Terminology mapping

Data de-duplication

Anomaly detection

Research/Review question evaluation

Comparable studies/assessment detection

Relevant Keywords identification



Information clustering/categorization

Abstract screening

Information extraction from documents

Information appraisal

Information consolidation

Quality assessment/correction of written work

Identification of risk/confidentiality issues within a document

Identification of relevant stakeholders/experts

Expert ranking

Evidence sources appraisal

Automatic Text Summarisation

Uncertainty Identification

Uncertainty Prioritisation

Automatic report production

Please indicate the stage of development of the product (R&D, commercialised, etc.) regarding Terminology mapping, and any relevant links providing a description of your product.

Please indicate the stage of development of the product (R&D, commercialised, etc.) regarding Data de-duplication, and any relevant links providing a description of your product.

Please indicate the stage of development of the product (R&D, commercialised, etc.) regarding Anomaly detection, and any relevant links providing a description of your product.

Please indicate the stage of development of the product (R&D, commercialised, etc.) regarding Research/Review question evaluation, and any relevant links providing a description of your product.

Please indicate the stage of development of the product (R&D, commercialised, etc.) regarding Comparable studies/assessment detection, and any relevant links providing a description of your product.



Please indicate the stage of development of the product (R&D, commercialised, etc.) regarding Relevant Keywords identification, and any relevant links providing a description of your product.

Please indicate the stage of development of the product (R&D, commercialised, etc.) regarding Information clustering/categorization, and any relevant links providing a description of your product.

Please indicate the stage of development of the product (R&D, commercialised, etc.) regarding Abstract screening, and any relevant links providing a description of your product.

Please indicate the stage of development of the product (R&D, commercialised, etc.) regarding Information extraction from documents, and any relevant links providing a description of your product.

Please indicate the stage of development of the product (R&D, commercialised, etc.) regarding Information appraisal, and any relevant links providing a description of your product.

Please indicate the stage of development of the product (R&D, commercialised, etc.) regarding Information consolidation, and any relevant links providing a description of your product.

Please indicate the stage of development of the product (R&D, commercialised, etc.) regarding Quality assessment/correction of written work, and any relevant links providing a description of your product.

Please indicate the stage of development of the product (R&D, commercialised, etc.) regarding Identification of risk/confidentiality issues within a document, and any relevant links providing a description of your product.

Please indicate the stage of development of the product (R&D, commercialised, etc.) regarding Identification of relevant stakeholders/experts, and any relevant links providing a description of your product.



Please indicate the stage of development of the product (R&D, commercialised, etc.) regarding Expert ranking, and any relevant links providing a description of your product.

Please indicate the stage of development of the product (R&D, commercialised, etc.) regarding Evidence sources appraisal, and any relevant links providing a description of your product.

Please indicate the stage of development of the product (R&D, commercialised, etc.) regarding Automatic Text Summarisation, and any relevant links providing a description of your product.

Please indicate the stage of development of the product (R&D, commercialised, etc.) regarding Uncertainty Identification, and any relevant links providing a description of your product.

Please indicate the stage of development of the product (R&D, commercialised, etc.) regarding Uncertainty Prioritisation, and any relevant links providing a description of your product.

Please indicate the stage of development of the product (R&D, commercialised, etc.) regarding Automatic report production, and any relevant links providing a description of your product.

Thank you for taking the time to complete this survey.

Section 3: Insights

I. What are the main benefits experienced or envisaged by your company from AI products in the above areas?

Time efficiencies/reduced effort
Improved quality of output
Reduction of human bias
Improved transparency and trustworthiness
Increased body of evidence used
Other

Please Elaborate



II. What are the main bottlenecks in terms of the development of AI solutions in the evidence management phase?

Lack of data
Lack of subject-specific ontologies
Lack of generalisation
No formalised process of evaluating the performance of such systems
Data extraction from commercial publishers is restricted
Need for qualitative evaluation to avoid bias in output
Linguistic ambiguities lead to errors
Lack of funding opportunities
Other

Please Elaborate

Section 4: Consent to follow-up

Please indicate whether you would consent to receiving a follow-up call from PwC EU Services EESV to obtain more information within this project.

Yes No
