
Commit 1e47028

Author: Matthew Wilcox (Oracle)

readahead: Update comments

 - Refer to folios where appropriate, not pages (Matthew Wilcox)
 - Eliminate references to the internal PG_readahead
 - Use "readahead" consistently - not "read-ahead" or "read ahead"
   (mostly Neil Brown)
 - Clarify some sections that, on reflection, weren't very clear
   (Neil Brown)
 - Minor punctuation/spelling fixes (Neil Brown)

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>

1 parent b4e089d commit 1e47028

1 file changed: mm/readahead.c
Lines changed: 45 additions & 47 deletions
@@ -13,29 +13,29 @@
  *
  * Readahead is used to read content into the page cache before it is
  * explicitly requested by the application. Readahead only ever
- * attempts to read pages that are not yet in the page cache. If a
- * page is present but not up-to-date, readahead will not try to read
+ * attempts to read folios that are not yet in the page cache. If a
+ * folio is present but not up-to-date, readahead will not try to read
  * it. In that case a simple ->readpage() will be requested.
  *
  * Readahead is triggered when an application read request (whether a
- * systemcall or a page fault) finds that the requested page is not in
+ * system call or a page fault) finds that the requested folio is not in
  * the page cache, or that it is in the page cache and has the
- * %PG_readahead flag set. This flag indicates that the page was loaded
- * as part of a previous read-ahead request and now that it has been
- * accessed, it is time for the next read-ahead.
+ * readahead flag set. This flag indicates that the folio was read
+ * as part of a previous readahead request and now that it has been
+ * accessed, it is time for the next readahead.
  *
  * Each readahead request is partly synchronous read, and partly async
- * read-ahead. This is reflected in the struct file_ra_state which
- * contains ->size being to total number of pages, and ->async_size
- * which is the number of pages in the async section. The first page in
- * this async section will have %PG_readahead set as a trigger for a
- * subsequent read ahead. Once a series of sequential reads has been
+ * readahead. This is reflected in the struct file_ra_state which
+ * contains ->size being the total number of pages, and ->async_size
+ * which is the number of pages in the async section. The readahead
+ * flag will be set on the first folio in this async section to trigger
+ * a subsequent readahead. Once a series of sequential reads has been
  * established, there should be no need for a synchronous component and
- * all read ahead request will be fully asynchronous.
+ * all readahead request will be fully asynchronous.
  *
- * When either of the triggers causes a readahead, three numbers need to
- * be determined: the start of the region, the size of the region, and
- * the size of the async tail.
+ * When either of the triggers causes a readahead, three numbers need
+ * to be determined: the start of the region to read, the size of the
+ * region, and the size of the async tail.
  *
  * The start of the region is simply the first page address at or after
  * the accessed address, which is not currently populated in the page
@@ -45,14 +45,14 @@
  * was explicitly requested from the determined request size, unless
  * this would be less than zero - then zero is used. NOTE THIS
  * CALCULATION IS WRONG WHEN THE START OF THE REGION IS NOT THE ACCESSED
- * PAGE.
+ * PAGE. ALSO THIS CALCULATION IS NOT USED CONSISTENTLY.
  *
  * The size of the region is normally determined from the size of the
  * previous readahead which loaded the preceding pages. This may be
  * discovered from the struct file_ra_state for simple sequential reads,
  * or from examining the state of the page cache when multiple
  * sequential reads are interleaved. Specifically: where the readahead
- * was triggered by the %PG_readahead flag, the size of the previous
+ * was triggered by the readahead flag, the size of the previous
  * readahead is assumed to be the number of pages from the triggering
  * page to the start of the new readahead. In these cases, the size of
  * the previous readahead is scaled, often doubled, for the new
@@ -65,52 +65,52 @@
  * larger than the current request, and it is not scaled up, unless it
  * is at the start of file.
  *
- * In general read ahead is accelerated at the start of the file, as
+ * In general readahead is accelerated at the start of the file, as
  * reads from there are often sequential. There are other minor
- * adjustments to the read ahead size in various special cases and these
+ * adjustments to the readahead size in various special cases and these
  * are best discovered by reading the code.
  *
- * The above calculation determines the readahead, to which any requested
- * read size may be added.
+ * The above calculation, based on the previous readahead size,
+ * determines the size of the readahead, to which any requested read
+ * size may be added.
  *
  * Readahead requests are sent to the filesystem using the ->readahead()
  * address space operation, for which mpage_readahead() is a canonical
  * implementation. ->readahead() should normally initiate reads on all
- * pages, but may fail to read any or all pages without causing an IO
+ * folios, but may fail to read any or all folios without causing an I/O
  * error. The page cache reading code will issue a ->readpage() request
- * for any page which ->readahead() does not provided, and only an error
+ * for any folio which ->readahead() did not read, and only an error
  * from this will be final.
  *
- * ->readahead() will generally call readahead_page() repeatedly to get
- * each page from those prepared for read ahead. It may fail to read a
- * page by:
+ * ->readahead() will generally call readahead_folio() repeatedly to get
+ * each folio from those prepared for readahead. It may fail to read a
+ * folio by:
  *
- * * not calling readahead_page() sufficiently many times, effectively
- *   ignoring some pages, as might be appropriate if the path to
+ * * not calling readahead_folio() sufficiently many times, effectively
+ *   ignoring some folios, as might be appropriate if the path to
  *   storage is congested.
  *
- * * failing to actually submit a read request for a given page,
+ * * failing to actually submit a read request for a given folio,
  *   possibly due to insufficient resources, or
  *
  * * getting an error during subsequent processing of a request.
  *
- * In the last two cases, the page should be unlocked to indicate that
- * the read attempt has failed. In the first case the page will be
- * unlocked by the caller.
+ * In the last two cases, the folio should be unlocked by the filesystem
+ * to indicate that the read attempt has failed. In the first case the
+ * folio will be unlocked by the VFS.
  *
- * Those pages not in the final ``async_size`` of the request should be
+ * Those folios not in the final ``async_size`` of the request should be
  * considered to be important and ->readahead() should not fail them due
  * to congestion or temporary resource unavailability, but should wait
  * for necessary resources (e.g. memory or indexing information) to
- * become available. Pages in the final ``async_size`` may be
+ * become available. Folios in the final ``async_size`` may be
  * considered less urgent and failure to read them is more acceptable.
- * In this case it is best to use delete_from_page_cache() to remove the
- * pages from the page cache as is automatically done for pages that
- * were not fetched with readahead_page(). This will allow a
- * subsequent synchronous read ahead request to try them again. If they
+ * In this case it is best to use filemap_remove_folio() to remove the
+ * folios from the page cache as is automatically done for folios that
+ * were not fetched with readahead_folio(). This will allow a
+ * subsequent synchronous readahead request to try them again. If they
  * are left in the page cache, then they will be read individually using
- * ->readpage().
- *
+ * ->readpage() which may be less efficient.
  */
 
 #include <linux/kernel.h>
@@ -157,7 +157,7 @@ static void read_pages(struct readahead_control *rac)
 		aops->readahead(rac);
 		/*
 		 * Clean up the remaining pages. The sizes in ->ra
-		 * maybe be used to size next read-ahead, so make sure
+		 * may be used to size the next readahead, so make sure
 		 * they accurately reflect what happened.
 		 */
 		while ((page = readahead_page(rac))) {
@@ -420,7 +420,7 @@ static pgoff_t count_history_pages(struct address_space *mapping,
 }
 
 /*
- * page cache context based read-ahead
+ * page cache context based readahead
  */
 static int try_context_readahead(struct address_space *mapping,
 				 struct file_ra_state *ra,
@@ -671,9 +671,9 @@ void page_cache_sync_ra(struct readahead_control *ractl,
 	bool do_forced_ra = ractl->file && (ractl->file->f_mode & FMODE_RANDOM);
 
 	/*
-	 * Even if read-ahead is disabled, issue this request as read-ahead
+	 * Even if readahead is disabled, issue this request as readahead
	 * as we'll need it to satisfy the requested range. The forced
-	 * read-ahead will do the right thing and limit the read to just the
+	 * readahead will do the right thing and limit the read to just the
 	 * requested range, which we'll set to 1 page for this case.
 	 */
 	if (!ractl->ra->ra_pages || blk_cgroup_congested()) {
@@ -689,15 +689,14 @@ void page_cache_sync_ra(struct readahead_control *ractl,
 		return;
 	}
 
-	/* do read-ahead */
 	ondemand_readahead(ractl, NULL, req_count);
 }
 EXPORT_SYMBOL_GPL(page_cache_sync_ra);
 
 void page_cache_async_ra(struct readahead_control *ractl,
 		struct folio *folio, unsigned long req_count)
 {
-	/* no read-ahead */
+	/* no readahead */
 	if (!ractl->ra->ra_pages)
 		return;
@@ -712,7 +711,6 @@ void page_cache_async_ra(struct readahead_control *ractl,
 	if (blk_cgroup_congested())
 		return;
 
-	/* do read-ahead */
 	ondemand_readahead(ractl, folio, req_count);
 }
 EXPORT_SYMBOL_GPL(page_cache_async_ra);
