[Perf] Optimize documentation lints **a lot** (1/2) (18% -> 10%) #14693

Merged (2 commits) on May 21, 2025
Changes from 1 commit

70 changes: 55 additions & 15 deletions clippy_lints/src/doc/markdown.rs
@@ -6,13 +6,15 @@ use rustc_lint::LateContext;
use rustc_span::{BytePos, Pos, Span};
use url::Url;

use crate::doc::DOC_MARKDOWN;
use crate::doc::{DOC_MARKDOWN, Fragments};
use std::ops::{ControlFlow, Range};

pub fn check(
cx: &LateContext<'_>,
valid_idents: &FxHashSet<String>,
text: &str,
span: Span,
fragments: &Fragments<'_>,
fragment_range: Range<usize>,
code_level: isize,
blockquote_level: isize,
) {
@@ -64,23 +66,38 @@ pub fn check(
close_parens += 1;
}

// Adjust for the current word
let offset = word.as_ptr() as usize - text.as_ptr() as usize;
let span = Span::new(
span.lo() + BytePos::from_usize(offset),
span.lo() + BytePos::from_usize(offset + word.len()),
span.ctxt(),
span.parent(),
);
// We'll use this offset to calculate the span to lint.
let fragment_offset = word.as_ptr() as usize - text.as_ptr() as usize;

check_word(cx, word, span, code_level, blockquote_level);
// Adjust for the current word
if check_word(
cx,
word,
fragments,
&fragment_range,
fragment_offset,
code_level,
blockquote_level,
)
.is_break()
{
return;
}
}
}

fn check_word(cx: &LateContext<'_>, word: &str, span: Span, code_level: isize, blockquote_level: isize) {
fn check_word(
cx: &LateContext<'_>,
word: &str,
fragments: &Fragments<'_>,
range: &Range<usize>,
fragment_offset: usize,
code_level: isize,
blockquote_level: isize,
) -> ControlFlow<()> {
/// Checks if a string is upper-camel-case, i.e., starts with an uppercase and
/// contains at least two uppercase letters (`Clippy` is ok) and one lower-case
/// letter (`NASA` is ok).
/// Plurals are also excluded (`IDs` is ok).
fn is_camel_case(s: &str) -> bool {
if s.starts_with(|c: char| c.is_ascii_digit() | c.is_ascii_lowercase()) {
Expand Down Expand Up @@ -117,6 +134,17 @@ fn check_word(cx: &LateContext<'_>, word: &str, span: Span, code_level: isize, b
// try to get around the fact that `foo::bar` parses as a valid URL
&& !url.cannot_be_a_base()
{
let Some(fragment_span) = fragments.span(cx, range.clone()) else {
return ControlFlow::Break(());

Contributor:
This seems wrong. One spot failing to get a span doesn't mean all the others will.


Member (author):
Fixed!

};

let span = Span::new(
fragment_span.lo() + BytePos::from_usize(fragment_offset),
fragment_span.lo() + BytePos::from_usize(fragment_offset + word.len()),
fragment_span.ctxt(),
fragment_span.parent(),
);

Contributor:
Should you not be adjusting the range before creating the span? `fragment_offset` looks like it's an offset in the markdown text.

Member (author):
I'm not sure I understand this comment correctly. This snippet is taken as-is from `check` with variable names fixed; the `offset` in `check` didn't really care about the markdown text either.

Contributor (@Jarcho), May 10, 2025:
`fragment_offset` looks like it's an offset in the cooked doc string. It can't be used as an offset for a span, since that doesn't always line up perfectly with the source text.

Member (author):
After testing this out, `text_to_check` only contains text; it doesn't contain links, bold text, etc. And `fragment_offset` is reset for each of those texts. I can add a debug assertion to future-proof this, though.

Contributor:
A text fragment can still contain escape sequences, e.g. `#[doc = "docs with unicode \u{xxxxxx}"]`. The string the fragments work on is the cooked version of the doc string, not the source form. Multiline comments (`/** */`) might also have issues; I don't know how those are presented.
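
As a minimal standalone sketch of this concern (not part of the PR's code): an escape in a doc attribute makes the source form longer than the cooked string the lints iterate over, so byte offsets taken past the escape no longer line up with the source text.

```rust
fn main() {
    // Source form of the attribute value, exactly as written in the file:
    //     #[doc = "docs with unicode \u{1F600}"]
    let source_form = r"docs with unicode \u{1F600}";
    // Cooked form: what the compiler stores after resolving the escape.
    let cooked_form = "docs with unicode \u{1F600}";

    // The escape occupies 9 bytes in the source but only 4 (UTF-8) in the
    // cooked string, so offsets computed on the cooked text drift.
    assert_eq!(source_form.len(), 18 + 9);
    assert_eq!(cooked_form.len(), 18 + 4);
}
```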

Member (author):
Just tested this: `#[doc = "*"]` does not lint at all. Even for the documented test cases, if you change the `/// *` form to `#[doc = "*"]`, they just don't lint.

For the `/** XXX */` case, everything seems to work correctly (or I'm testing for the wrong thing); either way, I've added some tests for this along with some other weird escape sequences.
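
For reference, a rough sketch of the forms being compared above (hypothetical snippets, not the exact tests added in the PR):

```rust
/// * the `///` form used by the documented test cases; per the discussion above, this lints
pub fn line_comment_form() {}

#[doc = "*"] // attribute form: per the testing above, this does not lint at all
pub fn attribute_form() {}

/** * block-comment form: per the testing above, this behaves as expected */
pub fn block_comment_form() {}
```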

Contributor:
I don't really want to stall the PR on this, and it's not something newly introduced here. I'll try to make it break later.

span_lint_and_sugg(
cx,
DOC_MARKDOWN,
@@ -126,17 +154,28 @@ fn check_word(cx: &LateContext<'_>, word: &str, span: Span, code_level: isize, b
format!("<{word}>"),
Applicability::MachineApplicable,
);
return;
return ControlFlow::Continue(());
}

// We assume that mixed-case words are not meant to be put inside backticks. (Issue #2343)
//
// We also assume that backticks are not necessary if inside a quote. (Issue #10262)
if code_level > 0 || blockquote_level > 0 || (has_underscore(word) && has_hyphen(word)) {
return;
return ControlFlow::Break(());
}

if has_underscore(word) || word.contains("::") || is_camel_case(word) || word.ends_with("()") {
let Some(fragment_span) = fragments.span(cx, range.clone()) else {
return ControlFlow::Break(());
};

let span = Span::new(
fragment_span.lo() + BytePos::from_usize(fragment_offset),
fragment_span.lo() + BytePos::from_usize(fragment_offset + word.len()),
fragment_span.ctxt(),
fragment_span.parent(),
);

Contributor:
Same as the previous two comments.


span_lint_and_then(
cx,
DOC_MARKDOWN,
Expand All @@ -149,4 +188,5 @@ fn check_word(cx: &LateContext<'_>, word: &str, span: Span, code_level: isize, b
},
);
}
ControlFlow::Continue(())
}
9 changes: 5 additions & 4 deletions clippy_lints/src/doc/mod.rs
@@ -730,7 +730,10 @@ struct Fragments<'a> {
}

impl Fragments<'_> {
fn span(self, cx: &LateContext<'_>, range: Range<usize>) -> Option<Span> {
/// get the span for the markdown range. Note that this function is not cheap, use it with
/// caution.
#[must_use]
fn span(&self, cx: &LateContext<'_>, range: Range<usize>) -> Option<Span> {
source_span_for_markdown_range(cx.tcx, self.doc, &range, self.fragments)
}
}
@@ -1068,9 +1071,7 @@ fn check_doc<'a, Events: Iterator<Item = (pulldown_cmark::Event<'a>, Range<usize
);
} else {
for (text, range, assoc_code_level) in text_to_check {
if let Some(span) = fragments.span(cx, range) {
markdown::check(cx, valid_idents, &text, span, assoc_code_level, blockquote_level);
}
markdown::check(cx, valid_idents, &text, &fragments, range, assoc_code_level, blockquote_level);
}
}
text_to_check = Vec::new();
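
Overall, the change defers the expensive span lookup (`source_span_for_markdown_range`, via `Fragments::span`) from the per-fragment loop into `check_word`, so it only runs when a word actually produces a diagnostic. A toy sketch of that pattern (simplified, assumed names; not the real Clippy API):

```rust
use std::ops::Range;

// Stand-in for the real `Fragments` helper in clippy_lints/src/doc/mod.rs.
struct Fragments<'a> {
    doc: &'a str,
}

impl Fragments<'_> {
    // Stand-in for `source_span_for_markdown_range`; pretend it is costly.
    fn span(&self, range: Range<usize>) -> Option<Range<usize>> {
        println!("(expensive) resolving source span for {range:?}");
        self.doc.get(range.clone()).map(|_| range)
    }
}

// Placeholder predicate standing in for the real doc_markdown word checks.
fn needs_lint(word: &str) -> bool {
    word.contains("::")
}

// Old shape: the caller resolves the span up front for every fragment,
// even when no word in it ends up linting.
fn check_eager(fragments: &Fragments<'_>, range: Range<usize>) {
    if let Some(span) = fragments.span(range.clone()) {
        for word in fragments.doc[range].split_whitespace() {
            if needs_lint(word) {
                println!("lint `{word}` somewhere in {span:?}");
            }
        }
    }
}

// New shape: pass the cheap inputs down and resolve the span lazily,
// only once a word actually needs a diagnostic.
fn check_lazy(fragments: &Fragments<'_>, range: Range<usize>) {
    for word in fragments.doc[range.clone()].split_whitespace() {
        if needs_lint(word) {
            if let Some(span) = fragments.span(range.clone()) {
                println!("lint `{word}` somewhere in {span:?}");
            }
        }
    }
}

fn main() {
    let fragments = Fragments { doc: "plain words only" };
    check_eager(&fragments, 0..16); // pays for the lookup, lints nothing
    check_lazy(&fragments, 0..16); // pays nothing
}
```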