From 52fe371f3f7ff8f0dfab044847c9c85e8601ba21 Mon Sep 17 00:00:00 2001
From: Martin Kronbichler
Date: Tue, 4 Jun 2019 14:14:39 +0200
Subject: [PATCH] Fix typo in link for the step-17 introduction

---
 examples/step-17/doc/intro.dox | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/examples/step-17/doc/intro.dox b/examples/step-17/doc/intro.dox
index 4db26fa5e8..35b9600635 100644
--- a/examples/step-17/doc/intro.dox
+++ b/examples/step-17/doc/intro.dox
@@ -72,7 +72,7 @@ In general, to be truly able to scale to large numbers of processors, one
 needs to split between the available processors every data structure whose
 size scales with the size of the overall problem. (For a definition of what
 it means for a program to "scale", see
-@ref GlossParallelScaling "this glossary entry.) This includes, for
+@ref GlossParallelScaling "this glossary entry".) This includes, for
 example, the triangulation, the matrix, and all global vectors (solution,
 right hand side). If one doesn't split all of these objects, one of those
 will be replicated on all processors and will eventually simply become too large
-- 
2.39.5