(50 minutes to learn)
The F measure (also called the F score; the balanced special case is the F1 score) is a measure of a test's accuracy, defined as the weighted harmonic mean of the test's precision and recall.
This concept has the following goals:
- Understand why the harmonic mean of the precision and recall is used instead of the arithmetic mean
- Should a web search engine, such as Google, favor high precision or high recall for its top 10 search results? What weights should it then use in the F measure?
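The weighted F measure behind these goals can be sketched as follows. This is a minimal illustration (the function name and example numbers are my own): beta > 1 weights recall more heavily, beta < 1 favors precision, and the harmonic mean, unlike the arithmetic mean, stays low whenever either component is low.

```python
def f_beta(precision: float, recall: float, beta: float = 1.0) -> float:
    """Weighted harmonic mean of precision and recall (F_beta)."""
    if precision == 0.0 and recall == 0.0:
        return 0.0  # convention: F is 0 when both components are 0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# The harmonic mean punishes imbalance: with precision 1.0 and recall 0.1,
# the arithmetic mean is a flattering 0.55, but F1 is only about 0.18.
p, r = 1.0, 0.1
print((p + r) / 2)   # arithmetic mean: 0.55
print(f_beta(p, r))  # F1: 2*1.0*0.1 / (1.0 + 0.1) ≈ 0.1818
```

A search engine that cares most about the precision of its top 10 results might evaluate with beta < 1, which pulls the score toward precision.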
Core resources (read/watch one of the following)
→ Introduction to Information Retrieval
A textbook on information retrieval techniques.
Location: "Evaluation of Unranked Retrieval Sets"
Supplemental resources (the following are optional, but you may find them useful)
Location: Article: F1 score
- Accuracy is the "traditional" way to measure a system's performance, but it weights positive and negative results equally. This can be undesirable in an information retrieval system, where the number of negative results (non-relevant documents) can vastly outweigh the number of positive results (relevant documents). Note, however, that the F measure is not invariant to label swapping (switching the positive and negative classes), which can be undesirable when, e.g., using the F measure to evaluate a classifier.
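Both points above can be shown with a small numeric sketch (the counts are made up for illustration): under heavy class imbalance, accuracy looks good even for a mediocre retriever while F1 does not, and swapping which class counts as "positive" leaves accuracy unchanged but changes F1.

```python
def scores(tp: int, fp: int, fn: int, tn: int):
    """Accuracy and F1 from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, f1

# Assumed collection: 10 relevant docs among 1000; the system returns 10
# docs, of which 5 are relevant.
tp, fp, fn, tn = 5, 5, 5, 985
print(scores(tp, fp, fn, tn))    # accuracy 0.99, but F1 only 0.5

# Swap the positive and negative classes: accuracy is unchanged, F1 is not.
print(scores(tn, fn, fp, tp))    # accuracy 0.99, F1 ≈ 0.995
```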