# Expert scoring algorithms

## Expert scores

FlexReview’s reviewer suggestions and expert review requirements are based on expert scores. A score is calculated for every user and every file based on that file’s modification history.

<figure><img src="https://273246003-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FOAPqUQVbLbsfI5YESl32%2Fuploads%2FGigW75EfCuz4qQ5ChXJI%2FUntitled%20(3).png?alt=media&#x26;token=9a470031-9b4e-4b09-a920-d8c54d9fc678" alt="" width="416"><figcaption><p>Public domain: https://commons.wikimedia.org/wiki/File:ForgettingCurve.svg</p></figcaption></figure>

Score calculation could take many factors into account, but we take a simple approach based on the [<mark style="color:blue;">Forgetting curve</mark>](https://en.wikipedia.org/wiki/Forgetting_curve). The forgetting curve models how information is lost from human memory over time. We apply this idea to a user's GitHub pull request authoring and reviewing history, which gives us an estimator that reflects both recency and accumulated knowledge from past contributions.

## Review load

A user's review load is essentially the number of PRs the user has interacted with in the past week, with one modification.

<figure><img src="https://273246003-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FOAPqUQVbLbsfI5YESl32%2Fuploads%2Fjy6weiZFacGYCAnNah4B%2Fimage.png?alt=media&#x26;token=48d9478c-4262-41cf-8f19-e90b810ec789" alt=""><figcaption></figcaption></figure>

Whenever a user interacts with a PR (e.g. by leaving a review or a comment), that interaction adds 1 to the user's review load. This amount then decays to zero over 7 days. For example, when a user leaves a review on a PR, that adds 1 review load; 3.5 days later, the load from that PR has decreased to 0.5. Load is tracked per PR and per user, and a user's total review load at any point is the sum of these per-PR loads. As a result, a person's review load changes gradually rather than jumping up and down abruptly.
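The behavior above can be sketched as a per-PR load that decays linearly from 1 to 0 over a 7-day window, summed across PRs. The linear shape is an assumption consistent with the stated numbers (1.0 at the moment of the review, 0.5 after 3.5 days); the function names are hypothetical.

```python
from datetime import datetime, timedelta, timezone

WINDOW_DAYS = 7.0

def pr_load(interaction_time, now):
    """Load contributed by one PR interaction: 1.0 at the moment of the
    interaction, decaying linearly to 0.0 after 7 days."""
    age_days = (now - interaction_time).total_seconds() / 86400.0
    return max(0.0, 1.0 - age_days / WINDOW_DAYS)

def review_load(interaction_times, now=None):
    """Total review load for a user: the sum of per-PR loads."""
    now = now or datetime.now(timezone.utc)
    return sum(pr_load(t, now) for t in interaction_times)
```

A single review therefore never drops off the count all at once; its contribution shrinks a little every day until it disappears after a week.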

## Expert load-balancing

Expert load-balancing is an assignment method that combines expertise assignment and load-balancing assignment.

<figure><img src="https://273246003-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FOAPqUQVbLbsfI5YESl32%2Fuploads%2FyTsyW9izShmu7WRXWa1h%2Fimage.png?alt=media&#x26;token=0d7aeec8-4e0d-4915-9a3a-1335a956d3af" alt=""><figcaption></figcaption></figure>

Based on the expert scores, we split users into two clusters: high-score users and low-score users. Among the high-score users, FlexReview assigns the reviewer so as to balance out review loads.
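One way to sketch this combination: cluster the candidates by expert score, then pick the least-loaded member of the high-score cluster. Splitting at the median score is an illustrative assumption; the actual clustering rule is not documented here.

```python
def assign_reviewer(candidates):
    """candidates: list of (user, expert_score, review_load) tuples.

    Split candidates into high- and low-score clusters at the median
    score (assumed rule for illustration), then assign the high-score
    user with the lowest current review load.
    """
    scores = sorted(score for _, score, _ in candidates)
    median = scores[len(scores) // 2]
    high_scorers = [c for c in candidates if c[1] >= median]
    return min(high_scorers, key=lambda c: c[2])[0]

# The busiest expert ("a") is skipped in favor of an equally qualified
# but less-loaded one ("b"); the low-score user ("c") is never picked.
assign_reviewer([("a", 10.0, 3.0), ("b", 9.0, 1.0), ("c", 1.0, 0.0)])  # → "b"
```

The two-step design means load balancing never overrides expertise: a completely idle user with no relevant file history is still excluded from the candidate pool.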


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.aviator.co/flexreview/reference/expert-scoring-algorithms.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
