--- /dev/null
+<a name="Intro"></a>
+<h1>Introduction</h1>
--- /dev/null
+\documentclass{article}
+\usepackage{amsmath}
+
+\begin{document}
+
+\section{Foreword}
+
+This program demonstrates a number of techniques that have not been shown in
+previous example programs. In particular, it shows how to program for
+one-dimensional problems, and some aspects of what to do with nonlinear
+problems, in particular how to transfer the solution from one grid to the next
+finer one. Apart from this, however, the program does not attempt to do much
+more than to entertain those who sometimes like to play with maths.
+
+\section{Introduction}
+
+In the book by Dacorogna on the Calculus of Variations, I found the following
+statement, which confused me tremendously at first (see Section 3.4.3,
+``Lavrentiev Phenomenon'', very slightly edited):
+
+\begin{quote}
+ \textbf{Theorem 4.6:} Let
+ $$I(u)=\int_0^1 (x-u^3)^2 (u')^6\; dx.$$
+ Let
+ $$
+ {\cal W}_1 = \{ u\in W^{1,\infty}(0,1) : u(0)=0, u(1)=1 \}
+ $$ $$
+ {\cal W}_2 = \{ u\in W^{1,1}(0,1) : u(0)=0, u(1)=1 \}
+ $$
+
+ Then
+ $$
+ \inf_{u\in {\cal W}_1} I(u) \ge c_0 > 0 = \inf_{u\in {\cal W}_2} I(u).
+ $$
+ Moreover the minimum of $I(u)$ over ${\cal W}_2$ is attained by
+ $u(x)=x^{1/3}$.
+
+ \textsc{Remarks.}
+
+ [\ldots]
+
+ ii) it is interesting to note that if one uses the usual finite element
+ methods (by taking piecewise affine functions, which are in $W^{1,\infty}$)
+ one will not be able to detect the minimum of some integrals such as the one
+ in the theorem.
+\end{quote}
+In other words: minimizing the energy functional over one space
+($W^{1,\infty}$) does not give the same value as when minimizing over a larger
+space ($W^{1,1}$). Furthermore, the book gives a rough estimate of the value
+of the constant $c_0$, namely $c_0=\tfrac{7^23^5}{2^{18}5^5}\approx 1.61\cdot
+10^{-6}$ (that calculation clearly underestimates the true value, but the
+point, of course, is only to show that it is strictly larger than zero).
+
+While the theorem itself was not surprising, the remark stunned me at first.
+After all, we know that we can approximate functions in $W^{1,1}$ to arbitrary
+accuracy. Also, although finite element functions are indeed in
+$W^{1,\infty}$, they are not uniformly so: if the function itself is bounded
+pointwise by, say, a constant $C$, then its gradient is bounded by $2C/h$, and
+thus $\|u_h\|_{1,\infty} \le 2C/h$. That means that we should be able to lift
+this bound simply by mesh refinement. Finite element functions are therefore
+only uniformly bounded in $W^{1,\infty}$ if one considers them on a fixed
+grid, not on a sequence of successively finer grids. (Note that the
+boundedness in $W^{1,1}$ can only be lifted in the same way by functions that
+oscillate at cell frequency; these, however, do not converge in any reasonable
+measure.)
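+
+To see where the bound $2C/h$ comes from, note that on a cell with endpoints
+$x_i$ and $x_{i+1}=x_i+h$, a piecewise linear function $u_h$ with
+$|u_h|\le C$ has the constant derivative
+$$
+  u_h' = \frac{u_h(x_{i+1})-u_h(x_i)}{h},
+  \qquad\text{so}\qquad
+  |u_h'| \le \frac{|u_h(x_{i+1})|+|u_h(x_i)|}{h} \le \frac{2C}{h}.
+$$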
+
+So it took me a while to see where the problem lies. Here it is: While we are
+able to approximate functions to arbitrary accuracy in \textit{Sobolev
+ norms}, this does not necessarily also hold with respect to the functional
+$I(u)$! After all, this functional was made to show exactly these
+pathologies.
+
+What happens in this case is actually not so difficult to understand. Let us
+look at what happens if we plug the lowest-order (piecewise linear)
+interpolant $i_hu$ of the optimal solution $u=x^{1/3}$ into the functional
+$I(u)$: on the leftmost cell, the left end of $i_hu$ is pinned to zero by the
+boundary condition, and the right end has the value $i_hu(h)=u(h)=h^{1/3}$,
+so that $i_hu(x)=h^{-2/3}x$ on this cell. Let us then consider only the
+contribution of this single cell to $I(i_hu)$:
+\begin{eqnarray*}
+  \int_0^h (x-(i_hu)^3)^2 ((i_hu)')^6\; dx
+  &=&
+  \int_0^h \left(x-(h^{-2/3}x)^3\right)^2 \left(h^{-2/3}\right)^6\; dx
+  \\
+  &=&
+  h^{-4} \int_0^h \left(x^2-2h^{-2}x^4+h^{-4}x^6\right)\; dx
+  \\
+  &=&
+  h^{-4} \left(\frac{h^3}{3}-\frac{2h^3}{5}+\frac{h^3}{7}\right)
+  \\
+  &=&
+  \frac{8}{105}\, h^{-1}
+  \;=\; {\cal O}(h^{-1}).
+\end{eqnarray*}
+Oops, even the contribution of the first cell blows up under mesh refinement,
+and we have not yet even summed up the contributions of the other cells!
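+
+This blow-up is also easy to observe numerically. The following small
+stand-alone program (it is not part of the tutorial program below, just a
+quick check) evaluates the first-cell contribution with the midpoint rule for
+a sequence of mesh widths and compares it with the value
+$\tfrac{8}{105}h^{-1}$ computed above:
+\begin{verbatim}
+#include <cmath>
+#include <iostream>
+
+int main ()
+{
+  for (double h=1e-1; h>=1e-6; h/=10.)
+    {
+      // On the first cell, the interpolant is i_h u(x) = h^{-2/3} x,
+      // so its derivative there is the constant h^{-2/3}.
+      const double slope = std::pow(h, -2./3.);
+
+      // Composite midpoint rule on [0,h] for the integrand
+      // (x - (i_h u)^3)^2 ((i_h u)')^6.
+      const unsigned int n = 1000;
+      double integral = 0;
+      for (unsigned int i=0; i<n; ++i)
+        {
+          const double x   = (i+0.5)*h/n;
+          const double ihu = slope*x;
+          integral += std::pow(x - ihu*ihu*ihu, 2.)
+                      * std::pow(slope, 6.)
+                      * h/n;
+        }
+
+      std::cout << "h=" << h
+                << "   first cell: " << integral
+                << "   8/(105 h)=" << 8./(105.*h)
+                << std::endl;
+    }
+}
+\end{verbatim}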
+
+It turns out that the other cells are not really problematic (since the
+gradient is bounded there by a constant independent of $h$), but we cannot
+really avoid the trouble with the first cell: if instead of the interpolant we
+choose some other finite element function that is closer on average to
+$x^{1/3}$ than the interpolant above, then we have to increase the slope of
+this function, since we have to obey the boundary condition at the left
+end. But then we are hit by the weight $(u')^6$. This weight is simply too
+strong!
+
+Of course, in practice the minimal value of $I$ cannot increase under mesh
+refinement: if it is finite for some function on some mesh, then it must be
+smaller or equal to that value on a finer mesh, since the original function is
+still in the space spanned by the shape functions on the finer grid, as finite
+element spaces are nested. However, the computation above shows that we should
+not be surprised if the value of the functional does not converge to zero,
+but rather to some finite value.
+
+There is one more conclusion to be drawn from the blow-up lesson above: we
+cannot expect the finite dimensional approximation to be close to the root
+function at the left end of the domain, on any mesh we choose: if it were,
+its energy would have to blow up. And we will see exactly this
+in the results section below.
+
+
+\section{What to do?}
+
+After this somewhat theoretical introduction, let us just once in our life
+have fun with pure mathematics, and actually see what happens in this problem
+when we run the finite element method on it. So here it goes: to find the
+minimum of $I(u)$, we have to find its stationary point. The condition for
+this reads
+\begin{equation*}
+  I'(u,\varphi)
+  =
+  \int_0^1 6 (x-u^3) (u')^5 \{ (x-u^3)\varphi' - u^2 u' \varphi\}\ dx
+  = 0
+\end{equation*}
+for all test functions $\varphi$ from the same space as that from which we
+take $u$, but with zero boundary conditions. If this space allows us to
+integrate by parts, then we could associate this with a two point boundary
+value problem
+\begin{equation*}
+ -(x-u^3) u^2(u')^6
+ - \frac{d}{dx} \left\{(x-u^3)^2 (u')^5\right\} = 0,
+ \qquad\qquad u(0)=0,
+ \quad u(1)=1,
+\end{equation*}
+but for finite elements, we will want to have it in weak form anyway. Since
+the equation is still nonlinear, we want to use a Newton method. For this, we
+compute iterates $u_{k+1}=u_k+\delta u_k$, and the updates are solutions of
+\begin{equation*}
+ I''(u_k,\delta u_k,\varphi)
+ =
+ -I'(u_k, \varphi).
+\end{equation*}
+$I''$ is actually a lengthy expression, so we will not write it down here
+(you'll find it in the code where we build up the matrix). The basic idea that
+you should get here is that we formulate a Newton method in a function space,
+and will only discretize each step separately.
+
+
+\section{The program}
+
+The program does exactly this: it discretizes each Newton step, and forms the
+update. That is, it computes the matrix and right hand side vector
+\begin{equation*}
+ A_{ij} = I''(u_k, \varphi_i, \varphi_j),
+ \qquad\qquad
+ f_i = -I'(u_k, \varphi_i),
+\end{equation*}
+and solves $Ax=f$ for the update $\delta u_k=\sum_i x_i \varphi_i$. Note
+that, since it emerges from the second derivatives of a functional, the
+matrix is of course symmetric, but it is not necessarily positive definite.
+In fact, it is not in general, but it should of course be so at the solution
+(otherwise we would have found a saddle point or an unstable minimum rather
+than a stable minimum).
+
+Formulating the Newton method in function spaces, and only discretizing
+afterwards has consequences: we have to linearize around $u_k$ when we want to
+compute $\delta u_k$, and we have to sum up these two functions afterwards.
+However, they may be living on different grids, if we have refined the grid
+before this step, so we will have to present a way to actually get a function
+from one grid to another. The \textrm{SolutionTransfer} class will help us
+here. On the other hand, discretizing every Newton step separately has the
+advantage that we can do the initial steps, when we are still far away from
+the solution, on a coarse mesh, and only go on to more expensive computations
+when we home in on the solution.
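+
+To make this concrete, the transfer pattern is, in essence, the following
+(this is how it appears in the \texttt{refine\_grid} function of the program
+below; \texttt{dof\_handler}, \texttt{triangulation}, \texttt{fe}, and
+\texttt{present\_solution} are member variables of the program's main class):
+\begin{verbatim}
+  SolutionTransfer<dim,double> solution_transfer(dof_handler);
+  triangulation.prepare_coarsening_and_refinement ();
+  solution_transfer.prepare_for_coarsening_and_refinement (present_solution);
+  triangulation.execute_coarsening_and_refinement ();
+  dof_handler.distribute_dofs (fe);
+
+  Vector<double> tmp (dof_handler.n_dofs());
+  solution_transfer.interpolate (present_solution, tmp);
+  present_solution = tmp;
+\end{verbatim}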
+
+Apart from this, the program does not contain much new stuff. We will use a
+very simplistic strategy for step length control in the Newton method (always
+take full steps) and for when we refine the mesh (after the Newton iteration
+on the current mesh has converged). Realistic
+programs solving nonlinear problems will have to be more clever in this
+respect, but it suffices for the purposes of this program, and, after all,
+this is a tutorial on programming with \textrm{deal.II}, not one on writing
+clever nonlinear solvers.
+
+
+\end{document}
--- /dev/null
+<a name="Results"></a>
+<h1>Results</h1>
--- /dev/null
+# $Id$
+
+
+# For the small projects Makefile, you basically need to fill in only
+# four fields.
+#
+# The first is the name of the application. It is assumed that the
+# application name is the same as the base file name of the single C++
+# file from which the application is generated.
+target = $(basename $(shell echo step-*.cc))
+
+# The second field determines whether you want to run your program in
+# debug or optimized mode. The latter is significantly faster, but no
+# run-time checking of parameters and internal states is performed, so
+# you should set this value to `on' while you develop your program,
+# and to `off' when running production computations.
+debug-mode = off
+
+
+# As third field, we need to give the path to the top-level deal.II
+# directory. You need to adjust this to your needs. Since this path is
+# probably the most often needed one in the Makefile internals, it is
+# designated by a single-character variable, since that can be
+# reference using $D only, i.e. without the parentheses that are
+# required for most other parameters, as e.g. in $(target).
+D = ../../
+
+
+# The last field specifies the names of data and other files that
+# shall be deleted when calling `make clean'. Object and backup files,
+# executables and the like are removed anyway. Here, we give a list of
+# files in the various output formats that deal.II supports.
+clean-up-files = *gmv *gnuplot *gpl *eps *pov
+
+
+
+
+#
+#
+# Usually, you will not need to change something beyond this point.
+#
+#
+# The next statement tells the `make' program where to find the
+# deal.II top level directory and to include the file with the global
+# settings
+include $D/common/Make.global_options
+
+
+# Since the whole project consists of only one file, we need not
+# consider difficult dependencies. We only have to declare the
+# libraries which we want to link to the object file, and there need
+# to be two sets of libraries: one for the debug mode version of the
+# application and one for the optimized mode. Here we have selected
+# the versions for 2d. Note that the order in which the libraries are
+# given here is important and that your applications won't link
+# properly if they are given in another order.
+#
+# You may need to augment the lists of libraries when compiling your
+# program for other dimensions, or when using third party libraries
+libs.g = $(lib-deal2-1d.g) \
+ $(lib-lac.g) \
+ $(lib-base.g)
+libs.o = $(lib-deal2-1d.o) \
+ $(lib-lac.o) \
+ $(lib-base.o)
+
+
+# We now use the variable defined above, which switches between debug and
+# optimized mode to select the set of libraries to link with. Included
+# in the list of libraries is the name of the object file which we
+# will produce from the single C++ file. Note that by default we use
+# the extension .go for object files compiled in debug mode and .o for
+# object files in optimized mode.
+ifeq ($(debug-mode),on)
+ libraries = $(target).g.o $(libs.g)
+else
+ libraries = $(target).o $(libs.o)
+endif
+
+
+# Now comes the first production rule: how to link the single object
+# file produced from the single C++ file into the executable. Since
+# this is the first rule in the Makefile, it is the one `make' selects
+# if you call it without arguments.
+$(target) : $(libraries)
+ @echo ============================ Linking $@
+ @$(CXX) -o $@ $^ $(LIBS) $(LDFLAGS)
+
+
+# To make running the application somewhat independent of the actual
+# program name, we usually declare a rule `run' which simply runs the
+# program. You can then run it by typing `make run'. This is also
+# useful if you want to call the executable with arguments which do
+# not change frequently. You may then want to add them to the
+# following rule:
+run: $(target)
+ @echo ============================ Running $<
+ @./$(target)
+
+
+# As a last rule to the `make' program, we define what to do when
+# cleaning up a directory. This usually involves deleting object files
+# and other automatically created files such as the executable itself,
+# backup files, and data files. Since the latter can be quite diverse,
+# you need to declare them at the top of this file.
+clean:
+ -rm -f *.o *.go *~ Makefile.dep $(target) $(clean-up-files)
+
+
+# Since we have not yet stated how to make an object file from a C++
+# file, we should do so now. Since the many flags passed to the
+# compiler are usually not of much interest, we suppress the actual
+# command line using the `at' sign in the first column of the rules
+# and write the string indicating what we do instead.
+./%.g.o :
+ @echo ==============debug========= $(<F)
+ @$(CXX) $(CXXFLAGS.g) -c $< -o $@
+./%.o :
+ @echo ==============optimized===== $(<F)
+ @$(CXX) $(CXXFLAGS.o) -c $< -o $@
+
+
+# The following statement tells make that the rules `run' and `clean'
+# are not expected to produce files of the same name as Makefile rules
+# usually do.
+.PHONY: run clean
+
+
+# Finally there is a rule which you normally need not care much about:
+# since the executable depends on some include files from the library,
+# besides the C++ application file of course, it is necessary to
+# re-generate the executable when one of the files it depends on has
+# changed. The following rule creates a dependency file
+# `Makefile.dep', which `make' uses to determine when to regenerate
+# the executable. This file is automagically remade whenever needed,
+# i.e. whenever one of the cc-/h-files changes. Make detects whether
+# to remake this file upon inclusion at the bottom of this file.
+Makefile.dep: $(target).cc Makefile \
+ $(shell echo $(include-path-base)/base/*.h \
+ $(include-path-lac)/lac/*.h \
+ $(include-path-deal2)/*/*.h)
+ @echo ============================ Remaking $@
+ @$(PERL) $D/common/scripts/make_dependencies.pl $(INCLUDE) -B. $(target).cc \
+ > Makefile.dep
+
+# To make the dependencies known to `make', we finally have to include
+# them:
+include Makefile.dep
+
+
--- /dev/null
+/* $Id$ */
+/* Author: Wolfgang Bangerth, University of Heidelberg, 2002 */
+
+/* $Id$ */
+/* Version: $Name$ */
+/* */
+/* Copyright (C) 2002 by the deal.II authors */
+/* */
+/* This file is subject to QPL and may not be distributed */
+/* without copyright and license information. Please refer */
+/* to the file deal.II/doc/license.html for the text and */
+/* further information on this license. */
+
+#include <base/quadrature_lib.h>
+#include <base/function.h>
+#include <base/logstream.h>
+#include <lac/vector.h>
+#include <lac/full_matrix.h>
+#include <lac/sparse_matrix.h>
+#include <lac/solver_gmres.h>
+#include <lac/solver_cg.h>
+#include <lac/vector_memory.h>
+#include <lac/precondition.h>
+#include <grid/tria.h>
+#include <dofs/dof_handler.h>
+#include <grid/grid_generator.h>
+#include <grid/tria_accessor.h>
+#include <grid/tria_iterator.h>
+#include <grid/tria_boundary_lib.h>
+#include <dofs/dof_accessor.h>
+#include <dofs/dof_tools.h>
+#include <fe/fe_values.h>
+#include <numerics/vectors.h>
+#include <numerics/matrices.h>
+#include <numerics/data_out.h>
+#include <numerics/solution_transfer.h>
+#include <fe/fe_q.h>
+#include <grid/grid_out.h>
+
+#include <grid/grid_refinement.h>
+
+#include <dofs/dof_constraints.h>
+
+#include <numerics/error_estimator.h>
+
+#include <fstream>
+#include <iostream>
+
+#ifdef HAVE_STD_STRINGSTREAM
+# include <sstream>
+#else
+# include <strstream>
+#endif
+
+
+
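+// The main class of this 1d program: it stores the mesh, the finite
+// element space and the current Newton iterate, and provides the
+// building blocks of the algorithm: assembling and solving a single
+// Newton step, refining the mesh (and transferring the iterate to the
+// new mesh), and evaluating the energy I(u_h) of the current iterate.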
+template <int dim>
+class MinimizationProblem
+{
+ public:
+ MinimizationProblem ();
+ ~MinimizationProblem ();
+ void run ();
+
+ private:
+ void setup_system ();
+ void assemble_newton_step (bool p);
+ void solve ();
+ void initialize ();
+ void refine_grid ();
+ double energy () const;
+ void output_results (const unsigned int cycle) const;
+
+ Triangulation<dim> triangulation;
+
+ FE_Q<dim> fe;
+ DoFHandler<dim> dof_handler;
+
+ ConstraintMatrix hanging_node_constraints;
+
+ SparsityPattern sparsity_pattern;
+ SparseMatrix<double> newton_matrix;
+
+ Vector<double> present_solution;
+ Vector<double> newton_residual;
+ Vector<double> newton_update;
+};
+
+
+
+class InitializationValues : public Function<1>
+{
+ public:
+ InitializationValues () : Function<1>() {};
+
+ virtual double value (const Point<1> &p,
+ const unsigned int component = 0) const;
+};
+
+
+
+double InitializationValues::value (const Point<1> &p,
+ const unsigned int) const
+{
+ return std::pow(p(0), 1./3.);
+};
+
+
+
+template <int dim>
+MinimizationProblem<dim>::MinimizationProblem () :
+ fe (1),
+ dof_handler (triangulation)
+{};
+
+
+template <int dim>
+MinimizationProblem<dim>::~MinimizationProblem ()
+{
+ dof_handler.clear ();
+};
+
+
+
+template <int dim>
+void MinimizationProblem<dim>::setup_system ()
+{
+ hanging_node_constraints.clear ();
+ DoFTools::make_hanging_node_constraints (dof_handler,
+ hanging_node_constraints);
+ hanging_node_constraints.close ();
+
+ sparsity_pattern.reinit (dof_handler.n_dofs(),
+ dof_handler.n_dofs(),
+ dof_handler.max_couplings_between_dofs());
+ DoFTools::make_sparsity_pattern (dof_handler, sparsity_pattern);
+
+ hanging_node_constraints.condense (sparsity_pattern);
+
+ sparsity_pattern.compress();
+};
+
+
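+// Small helper function: compute |v|^n = (v*v)^(n/2) for an even
+// exponent n. It is used below for the (u')^4 and (u')^6 factors that
+// appear in the first and second derivatives of the energy functional.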
+template <int dim>
+double gradient_power (const Tensor<1,dim> &v,
+ const unsigned int n)
+{
+ Assert ((n/2)*2 == n, ExcMessage ("Value of 'n' must be even"));
+ double p = 1;
+ for (unsigned int k=0; k<n; k+=2)
+    p *= (v*v);
+ return p;
+};
+
+
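+// Assemble one Newton step. The right hand side vector is always the
+// residual f_i = -I'(u_k, phi_i); if the flag 'p' is set, the matrix
+// A_ij = I''(u_k, phi_i, phi_j) is rebuilt as well. At the end, hanging
+// node constraints are condensed away and zero boundary values are
+// imposed on the Newton update, since the boundary values of the
+// solution are already satisfied by the initial guess.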
+template <int dim>
+void MinimizationProblem<dim>::assemble_newton_step (const bool p)
+{
+ if (p)
+ newton_matrix.reinit (sparsity_pattern);
+ newton_update.reinit (dof_handler.n_dofs());
+ newton_residual.reinit (dof_handler.n_dofs());
+
+ QGauss3<dim> quadrature_formula;
+
+ FEValues<dim> fe_values (fe, quadrature_formula,
+ UpdateFlags(update_values |
+ update_gradients |
+ update_q_points |
+ update_JxW_values));
+
+ const unsigned int dofs_per_cell = fe.dofs_per_cell;
+ const unsigned int n_q_points = quadrature_formula.n_quadrature_points;
+
+ FullMatrix<double> cell_matrix (dofs_per_cell, dofs_per_cell);
+ Vector<double> cell_rhs (dofs_per_cell);
+
+ std::vector<unsigned int> local_dof_indices (dofs_per_cell);
+
+ std::vector<double> local_solution_values (n_q_points);
+ std::vector<Tensor<1,dim> > local_solution_grads (n_q_points);
+
+ typename DoFHandler<dim>::active_cell_iterator
+ cell = dof_handler.begin_active(),
+ endc = dof_handler.end();
+ for (; cell!=endc; ++cell)
+ {
+ cell_matrix.clear ();
+ cell_rhs.clear ();
+
+ fe_values.reinit (cell);
+
+ fe_values.get_function_values (present_solution,
+ local_solution_values);
+ fe_values.get_function_grads (present_solution,
+ local_solution_grads);
+
+ for (unsigned int q_point=0; q_point<n_q_points; ++q_point)
+ {
+ const double u = local_solution_values[q_point],
+ x = fe_values.quadrature_point(q_point)(0);
+ const double x_minus_u3 = (x-std::pow(u,3));
+
+ const Tensor<1,dim> u_prime = local_solution_grads[q_point];
+
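+          // The three contributions to the cell matrix below correspond
+          // to the three second derivatives of the energy density
+          // F(x,u,u') = (x-u^3)^2 (u')^6, namely
+          //   F_{u'u'} = 30 (x-u^3)^2 (u')^4,
+          //   F_{uu'}  = -36 u^2 (x-u^3) (u')^5,
+          //   F_{uu}   = (30 u^4 - 12 x u) (u')^6.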
+ for (unsigned int i=0; i<dofs_per_cell; ++i)
+ {
+ if (p)
+ for (unsigned int j=0; j<dofs_per_cell; ++j)
+ {
+ cell_matrix(i,j)
+ += (30.* x_minus_u3 * x_minus_u3 *
+ gradient_power (u_prime, 4) *
+ (fe_values.shape_grad(i,q_point) *
+ fe_values.shape_grad(j,q_point)) *
+ fe_values.JxW(q_point));
+
+ cell_matrix(i,j)
+ += (-36. * x_minus_u3 * u * u
+ *
+ gradient_power(u_prime, 4)
+ *
+ (fe_values.shape_value(i,q_point) *
+ (fe_values.shape_grad(j,q_point) *
+ u_prime)
+ +
+ fe_values.shape_value(j,q_point) *
+ (fe_values.shape_grad(i,q_point) *
+ u_prime)
+ )*
+ fe_values.JxW(q_point));
+
+ cell_matrix(i,j)
+ += ((30.* std::pow(u,4.) - 12*x*u)
+ *
+ gradient_power(u_prime, 6)
+ *
+ (fe_values.shape_value(i,q_point) *
+ fe_values.shape_value(j,q_point)) *
+ fe_values.JxW(q_point));
+ };
+
+	      cell_rhs(i) += -((6. * x_minus_u3 *
+				gradient_power (u_prime, 4)
+				*
+				(x_minus_u3 *
+				 (u_prime *
+				  fe_values.shape_grad(i,q_point))
+				 -
+				 (u_prime*u_prime) * u * u *
+				 fe_values.shape_value(i,q_point))
+				)
+			       *
+			       fe_values.JxW(q_point));
+ };
+ };
+
+
+ cell->get_dof_indices (local_dof_indices);
+ for (unsigned int i=0; i<dofs_per_cell; ++i)
+ {
+ if (p)
+ for (unsigned int j=0; j<dofs_per_cell; ++j)
+ newton_matrix.add (local_dof_indices[i],
+ local_dof_indices[j],
+ cell_matrix(i,j));
+
+ newton_residual(local_dof_indices[i]) += cell_rhs(i);
+ };
+ };
+
+ hanging_node_constraints.condense (newton_matrix);
+ hanging_node_constraints.condense (newton_residual);
+
+ std::map<unsigned int,double> boundary_values;
+ VectorTools::interpolate_boundary_values (dof_handler,
+ 0,
+ ZeroFunction<dim>(),
+ boundary_values);
+ if (dim == 1)
+ VectorTools::interpolate_boundary_values (dof_handler,
+ 1,
+ ZeroFunction<dim>(),
+ boundary_values);
+ MatrixTools::apply_boundary_values (boundary_values,
+ newton_matrix,
+ newton_update,
+ newton_residual);
+};
+
+
+
+template <int dim>
+void MinimizationProblem<dim>::solve ()
+{
+ SolverControl solver_control (1000,
+ 1e-3*newton_residual.l2_norm());
+ PrimitiveVectorMemory<> vector_memory;
+ SolverGMRES<> gmres (solver_control, vector_memory);
+
+ PreconditionJacobi<> preconditioner;
+ preconditioner.initialize(newton_matrix, 1.2);
+
+
+ gmres.solve (newton_matrix, newton_update, newton_residual,
+ preconditioner);
+
+ present_solution += newton_update;
+ hanging_node_constraints.distribute (present_solution);
+};
+
+
+
+template <int dim>
+void MinimizationProblem<dim>::initialize ()
+{
+ dof_handler.distribute_dofs (fe);
+ present_solution.reinit (dof_handler.n_dofs());
+ VectorTools::interpolate (dof_handler,
+ InitializationValues(),
+ present_solution);
+};
+
+
+
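+// Refine the mesh using the Kelly error indicator applied to the current
+// iterate, and carry the iterate over to the new mesh with the
+// SolutionTransfer class.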
+template <int dim>
+void MinimizationProblem<dim>::refine_grid ()
+{
+ Vector<float> estimated_error_per_cell (triangulation.n_active_cells());
+
+ typename FunctionMap<dim>::type neumann_boundary;
+
+ KellyErrorEstimator<dim>::estimate (dof_handler,
+ Quadrature<dim-1>(1),
+ neumann_boundary,
+ present_solution,
+ estimated_error_per_cell);
+
+ GridRefinement::refine_and_coarsen_fixed_number (triangulation,
+ estimated_error_per_cell,
+ 0.3, 0.03);
+
+ SolutionTransfer<dim,double> solution_transfer(dof_handler);
+ triangulation.prepare_coarsening_and_refinement();
+ solution_transfer.prepare_for_coarsening_and_refinement (present_solution);
+ triangulation.execute_coarsening_and_refinement ();
+ dof_handler.distribute_dofs (fe);
+
+ Vector<double> tmp (dof_handler.n_dofs());
+ solution_transfer.interpolate (present_solution, tmp);
+ present_solution = tmp;
+};
+
+
+
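+// Compute the energy I(u_h) of the current iterate by numerical
+// quadrature; this is the value printed in each refinement cycle.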
+template <int dim>
+double MinimizationProblem<dim>::energy () const
+{
+ double energy = 0.;
+
+ QGauss3<dim> quadrature_formula;
+ FEValues<dim> fe_values (fe, quadrature_formula,
+ UpdateFlags(update_values |
+ update_gradients |
+ update_q_points |
+ update_JxW_values));
+
+ const unsigned int n_q_points = quadrature_formula.n_quadrature_points;
+
+ std::vector<double> local_solution_values (n_q_points);
+ std::vector<Tensor<1,dim> > local_solution_grads (n_q_points);
+
+ typename DoFHandler<dim>::active_cell_iterator
+ cell = dof_handler.begin_active(),
+ endc = dof_handler.end();
+ for (; cell!=endc; ++cell)
+ {
+ fe_values.reinit (cell);
+ fe_values.get_function_values (present_solution,
+ local_solution_values);
+ fe_values.get_function_grads (present_solution,
+ local_solution_grads);
+
+ for (unsigned int q_point=0; q_point<n_q_points; ++q_point)
+ energy += (std::pow (fe_values.quadrature_point(q_point)(0)
+ -
+ std::pow (local_solution_values[q_point],
+ 3),
+ 2) *
+ gradient_power (local_solution_grads[q_point],
+ 6) *
+ fe_values.JxW (q_point));
+ };
+
+ return energy;
+};
+
+
+
+template <int dim>
+void MinimizationProblem<dim>::output_results (const unsigned int cycle) const
+{
+ DataOut<dim> data_out;
+ data_out.attach_dof_handler (dof_handler);
+ data_out.add_data_vector (present_solution, "solution");
+ data_out.build_patches ();
+
+#ifdef HAVE_STD_STRINGSTREAM
+ std::ostringstream filename;
+#else
+ std::ostrstream filename;
+#endif
+ filename << "solution-"
+ << cycle
+ << ".gnuplot"
+ << std::ends;
+#ifdef HAVE_STD_STRINGSTREAM
+ std::ofstream out (filename.str().c_str());
+#else
+ std::ofstream out (filename.str());
+#endif
+
+ data_out.write_gnuplot (out);
+};
+
+
+
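+// Top-level logic: create a coarse mesh on (0,1), interpolate the initial
+// guess x^{1/3}, then alternate between Newton iterations on the current
+// mesh (at most 1000 per mesh, stopped when the residual is small) and
+// adaptive mesh refinement.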
+template <int dim>
+void MinimizationProblem<dim>::run ()
+{
+ GridGenerator::hyper_cube (triangulation, 0., 1.);
+ triangulation.refine_global (1);
+ initialize ();
+
+ for (unsigned int refinement_cycle=0; refinement_cycle<50;
+ ++refinement_cycle)
+ {
+ std::cout << "Cycle " << refinement_cycle << ':' << std::endl;
+
+ std::cout << " Number of active cells: "
+ << triangulation.n_active_cells()
+ << std::endl;
+
+ setup_system ();
+
+ unsigned int iteration=0;
+ for (; iteration<1000; ++iteration)
+ {
+ assemble_newton_step (true);
+ solve ();
+
+ if (newton_residual.l2_norm() < 1.e-12)
+ break;
+ };
+ output_results (refinement_cycle);
+
+ std::cout << " Iterations : "
+ << iteration
+ << std::endl;
+ std::cout << " Energy : "
+ << energy ()
+ << std::endl;
+
+ refine_grid ();
+ };
+};
+
+
+int main ()
+{
+ try
+ {
+ deallog.depth_console (0);
+
+ MinimizationProblem<1> minimization_problem_1d;
+ minimization_problem_1d.run ();
+ }
+ catch (std::exception &exc)
+ {
+ std::cerr << std::endl << std::endl
+ << "----------------------------------------------------"
+ << std::endl;
+ std::cerr << "Exception on processing: " << std::endl
+ << exc.what() << std::endl
+ << "Aborting!" << std::endl
+ << "----------------------------------------------------"
+ << std::endl;
+ return 1;
+ }
+ catch (...)
+ {
+ std::cerr << std::endl << std::endl
+ << "----------------------------------------------------"
+ << std::endl;
+ std::cerr << "Unknown exception!" << std::endl
+ << "Aborting!" << std::endl
+ << "----------------------------------------------------"
+ << std::endl;
+ return 1;
+ };
+ return 0;
+};