91d933c221 perf c2c: Add metrics "RMT Load Hit"
The metrics "LLC Ld Miss" and "Load Dram" overlap with each other for
accouting items:

  "LLC Ld Miss" = "lcl_dram" + "rmt_dram" + "rmt_hit" + "rmt_hitm"
  "Load Dram"   = "lcl_dram" + "rmt_dram"

Furthermore, the metric "LLC Ld Miss" is not informative on its own:
it only shows a summary value and cannot give a breakdown of its
components.

For this reason, add a new metric "RMT Load Hit" to present remote
cache hits; it contains two items:

  "RMT Load Hit" = remote hit ("rmt_hit") + remote hitm ("rmt_hitm")

As a result, the metric "LLC Ld Miss" is perfectly divided into the
two metrics "RMT Load Hit" and "Load Dram".  It's no longer necessary
to keep the metric "LLC Ld Miss", so remove it.

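A minimal sketch of the arithmetic above, assuming counters with the
field names used in this message (the real counters live in struct
c2c_stats in perf's util/mem-events.h; the layout may differ):

  /* sketch only: field names follow the commit text */
  static u64 rmt_load_hit(const struct c2c_stats *s)
  {
          return s->rmt_hit + s->rmt_hitm;   /* new "RMT Load Hit" */
  }

  static u64 load_dram(const struct c2c_stats *s)
  {
          return s->lcl_dram + s->rmt_dram;  /* "Load Dram" */
  }

  /* the removed "LLC Ld Miss" was exactly rmt_load_hit() + load_dram() */
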
Before:

  #        ----------- Cacheline ----------      Tot  ------- Load Hitm -------    Total    Total    Total  ---- Stores ----  ----- Core Load Hit -----  - LLC Load Hit --      LLC  --- Load Dram ----
  # Index             Address  Node  PA cnt     Hitm    Total  LclHitm  RmtHitm  records    Loads   Stores    L1Hit   L1Miss       FB       L1       L2    LclHit  LclHitm  Ld Miss       Lcl       Rmt
  # .....  ..................  ....  ......  .......  .......  .......  .......  .......  .......  .......  .......  .......  .......  .......  .......  ........  .......  .......  ........  ........
  #
        0      0x55f07d580100     0    1499   85.89%      481      481        0     7243     3879     3364     2599      765      548     2615       66       169      481        0         0         0
        1      0x55f07d580080     0       1   13.93%       78       78        0      664      664        0        0        0      187      361       27        11       78        0         0         0
        2      0x55f07d5800c0     0       1    0.18%        1        1        0      405      405        0        0        0      131        0       10       263        1        0         0         0

After:

  #        ----------- Cacheline ----------      Tot  ------- Load Hitm -------    Total    Total    Total  ---- Stores ----  ----- Core Load Hit -----  - LLC Load Hit --  - RMT Load Hit --  --- Load Dram ----
  # Index             Address  Node  PA cnt     Hitm    Total  LclHitm  RmtHitm  records    Loads   Stores    L1Hit   L1Miss       FB       L1       L2    LclHit  LclHitm    RmtHit  RmtHitm       Lcl       Rmt
  # .....  ..................  ....  ......  .......  .......  .......  .......  .......  .......  .......  .......  .......  .......  .......  .......  ........  .......  ........  .......  ........  ........
  #
        0      0x55f07d580100     0    1499   85.89%      481      481        0     7243     3879     3364     2599      765      548     2615       66       169      481         0        0         0         0
        1      0x55f07d580080     0       1   13.93%       78       78        0      664      664        0        0        0      187      361       27        11       78         0        0         0         0
        2      0x55f07d5800c0     0       1    0.18%        1        1        0      405      405        0        0        0      131        0       10       263        1         0        0         0         0

Signed-off-by: Leo Yan <leo.yan@linaro.org>
Tested-by: Joe Mario <jmario@redhat.com>
Acked-by: Jiri Olsa <jolsa@redhat.com>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Link: https://lore.kernel.org/r/20201014050921.5591-9-leo.yan@linaro.org
2020-10-15 09:34:51 -03:00
77c158698c perf c2c: Correct LLC load hit metrics
"rmt_hit" is accounted into two metrics: one is accounted into the
metrics "LLC Ld Miss" (see the function llc_miss() for calculation
"llcmiss"); and it's accounted into metrics "LLC Load Hit".  Thus,
for the literal meaning, it is contradictory that "rmt_hit" is
accounted for both "LLC Ld Miss" (LLC miss) and "LLC Load Hit"
(LLC hit).

This easily introduces confusion: "LLC Load Hit" gives the impression
that all of its items are LLC hits, when in fact "rmt_hit" is an LLC
miss that hits in a remote cache.

To give the metric "LLC Load Hit" clear semantics, move "rmt_hit" out
of it, so that "LLC Load Hit" contains two items:

  LLC Load Hit = LLC's hit ("ld_llchit") + LLC's hitm ("lcl_hitm")

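Expressed against the same counters, the change looks roughly like this
(a sketch; the "before" composition corresponds to the old "Llc"/"Rmt"
columns visible in the tables further down this log):

  /* before: the remote hit was displayed under "LLC Load Hit" */
  llc_load_hit = s->ld_llchit + s->rmt_hit;

  /* after: only true LLC hits remain under the header */
  llc_load_hit = s->ld_llchit + s->lcl_hitm;
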
For output alignment, adjust the header for "LLC Load Hit".

Signed-off-by: Leo Yan <leo.yan@linaro.org>
Tested-by: Joe Mario <jmario@redhat.com>
Acked-by: Jiri Olsa <jolsa@redhat.com>
Link: https://lore.kernel.org/r/20201014050921.5591-8-leo.yan@linaro.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-10-15 09:34:51 -03:00
ed626a3e52 perf c2c: Change header for LLC local hit
Replace the header string "Lcl" with "LclHit", which expresses more
explicitly that the event type is an LLC local hit.

Signed-off-by: Leo Yan <leo.yan@linaro.org>
Tested-by: Joe Mario <jmario@redhat.com>
Acked-by: Jiri Olsa <jolsa@redhat.com>
Link: https://lore.kernel.org/r/20201014050921.5591-7-leo.yan@linaro.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-10-15 09:34:49 -03:00
0fbe2fe965 perf c2c: Use more explicit headers for HITM
Local and remote HITM use the headers 'Lcl' and 'Rmt' respectively.
If we later extend the tool to display these two dimensions under more
than one metric, users will not be able to tell the semantics apart
based only on the header strings 'Lcl' and 'Rmt'.

To express the meaning of the HITM items explicitly, this patch
changes the header strings to "LclHitm" and "RmtHitm"; these are more
readable and allow other metrics to reuse the HITM items.

Signed-off-by: Leo Yan <leo.yan@linaro.org>
Tested-by: Joe Mario <jmario@redhat.com>
Acked-by: Jiri Olsa <jolsa@redhat.com>
Link: https://lore.kernel.org/r/20201014050921.5591-6-leo.yan@linaro.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-10-15 09:34:47 -03:00
fdd32d7e8e perf c2c: Change header from "LLC Load Hitm" to "Load Hitm"
The metrics "LLC Load Hitm" contains two items: one is "local Hitm" and
another is "remote Hitm".

"local Hitm" means: L3 HIT and was serviced by another processor core
with a cross core snoop where modified copies were found; it's no doubt
that "local Hitm" belongs to LLC access.

But for "remote Hitm", based on the code in util/mem-events, it's the
event for remote cache HIT and was serviced by another processor core
with modified copies.  Thus the remote Hitm is a remote cache's hit and
actually it's LLC load miss.

Now the display format gives users the impression that "local Hitm"
and "remote Hitm" both belong to LLC loads, but as described above this
is not the case.

This patch changes the header from "LLC Load Hitm" to "Load Hitm";
this avoids giving the wrong impression that all Hitm items belong to
the LLC.

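For reference, the decode separates the two cases roughly as follows (a
sketch of the logic in c2c_decode_stats() in util/mem-events.c, using
the perf_mem_data_src flag names; the real code handles more cases):

  if (data_src->mem_snoop & PERF_MEM_SNOOP_HITM) {
          if (data_src->mem_lvl & PERF_MEM_LVL_L3)
                  stats->lcl_hitm++;   /* LLC hit, modified copy snooped */
          if (data_src->mem_lvl & PERF_MEM_LVL_REM_CCE1)
                  stats->rmt_hitm++;   /* remote cache hit: an LLC miss */
  }
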
Signed-off-by: Leo Yan <leo.yan@linaro.org>
Tested-by: Joe Mario <jmario@redhat.com>
Acked-by: Jiri Olsa <jolsa@redhat.com>
Link: https://lore.kernel.org/r/20201014050921.5591-5-leo.yan@linaro.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-10-15 09:34:45 -03:00
6d662d730d perf c2c: Organize metrics based on memory hierarchy
The metrics are not organized based on the memory hierarchy, i.e. the
tool doesn't order them from the closest memory node (e.g. L1/L2 cache)
to the farthest (e.g. L3 cache and DRAM).

To output the metrics in a friendlier form, this patch reorders them
based on the memory hierarchy:

  "Core Load Hit" => "LLC Load Hit" => "LLC Ld Miss" => "Load Dram"

Signed-off-by: Leo Yan <leo.yan@linaro.org>
Tested-by: Joe Mario <jmario@redhat.com>
Acked-by: Jiri Olsa <jolsa@redhat.com>
Link: https://lore.kernel.org/r/20201014050921.5591-4-leo.yan@linaro.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-10-15 09:34:41 -03:00
4f28641bde perf c2c: Display "Total Stores" as a standalone metrics
The total stores are displayed under the metric "Store Reference"; to
use the same format as total records and all loads, extract the total
stores number as a standalone metric "Total Stores".

After this patch, the tool shows the summary numbers ("Total records",
"Total loads", "Total Stores") in a unified form.

Before:

  #        ----------- Cacheline ----------      Tot  ----- LLC Load Hitm -----    Total    Total  ---- Store Reference ----  --- Load Dram ----      LLC  ----- Core Load Hit -----  -- LLC Load Hit --
  # Index             Address  Node  PA cnt     Hitm    Total      Lcl      Rmt  records    Loads    Total    L1Hit   L1Miss       Lcl       Rmt  Ld Miss       FB       L1       L2       Llc       Rmt
  # .....  ..................  ....  ......  .......  .......  .......  .......  .......  .......  .......  .......  .......  ........  ........  .......  .......  .......  .......  ........  ........
  #
        0      0x55f07d580100     0    1499   85.89%      481      481        0     7243     3879     3364     2599      765         0         0        0      548     2615       66       169         0
        1      0x55f07d580080     0       1   13.93%       78       78        0      664      664        0        0        0         0         0        0      187      361       27        11         0
        2      0x55f07d5800c0     0       1    0.18%        1        1        0      405      405        0        0        0         0         0        0      131        0       10       263         0

After:

  #        ----------- Cacheline ----------      Tot  ----- LLC Load Hitm -----    Total    Total    Total  ---- Stores ----  --- Load Dram ----      LLC  ----- Core Load Hit -----  -- LLC Load Hit --
  # Index             Address  Node  PA cnt     Hitm    Total      Lcl      Rmt  records    Loads   Stores    L1Hit   L1Miss       Lcl       Rmt  Ld Miss       FB       L1       L2       Llc       Rmt
  # .....  ..................  ....  ......  .......  .......  .......  .......  .......  .......  .......  .......  .......  ........  ........  .......  .......  .......  .......  ........  ........
  #
        0      0x55f07d580100     0    1499   85.89%      481      481        0     7243     3879     3364     2599      765         0         0        0      548     2615       66       169         0
        1      0x55f07d580080     0       1   13.93%       78       78        0      664      664        0        0        0         0         0        0      187      361       27        11         0
        2      0x55f07d5800c0     0       1    0.18%        1        1        0      405      405        0        0        0         0         0        0      131        0       10       263         0

Signed-off-by: Leo Yan <leo.yan@linaro.org>
Tested-by: Joe Mario <jmario@redhat.com>
Acked-by: Jiri Olsa <jolsa@redhat.com>
Link: https://lore.kernel.org/r/20201014050921.5591-3-leo.yan@linaro.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-10-15 09:34:36 -03:00
b596e979c8 perf c2c: Display the total numbers continuously
When viewing the statistics in "breakdown" mode, it's good to show the
summary numbers for total records, all stores and all loads first; the
subsequent columns can then break them down into more detailed items.

To achieve this, this patch displays the summary numbers for
records/stores/loads contiguously and places them before the breakdown
items; this allows users to easily read the summarized statistics.

Signed-off-by: Leo Yan <leo.yan@linaro.org>
Tested-by: Joe Mario <jmario@redhat.com>
Acked-by: Jiri Olsa <jolsa@redhat.com>
Link: https://lore.kernel.org/r/20201014050921.5591-2-leo.yan@linaro.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-10-15 09:34:33 -03:00
4a770b413f parisc: Add MAP_UNINITIALIZED define
We will not allow uninitialized anon mmaps, but we need this define
to prevent build errors, e.g. in the Debian foot package.

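Presumably this amounts to a no-op flag definition along these lines (a
sketch; the real define is in arch/parisc/include/uapi/asm/mman.h):

  /* accepted for source compatibility, but has no effect on parisc */
  #define MAP_UNINITIALIZED 0
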
Suggested-by: John David Anglin <dave.anglin@bell.net>
Signed-off-by: Helge Deller <deller@gmx.de>
2020-10-15 08:10:39 +02:00
1a01727676 selftests: Add VRF route leaking tests
The objective of the tests is to check that ICMP errors generated while
crossing between VRFs are properly routed back to the source host.

The first ttl test sends a ping with a ttl of 1 from h1 to h2 and parses the
output of the command to check that a ttl expired error is received.

The second ttl test runs traceroute from h1 to h2 and parses the output to
check for a hop on r1.

The mtu test sends a ping with a payload of 1450 from h1 to h2 through
r1, which has an interface with an MTU of 1400, and parses the output
of the command to check that a fragmentation needed error is received.

[ The IPv6 MTU test still fails with the symmetric routing setup. It
  appears to be caused by source address selection picking ::1.  Fixing
  this is beyond the scope of this series. ]

Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Reviewed-by: David Ahern <dsahern@gmail.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-10-14 17:14:28 -07:00
4da9af0014 threads-v5.10
-----BEGIN PGP SIGNATURE-----
 
 iHUEABYKAB0WIQRAhzRXHqcMeLMyaSiRxhvAZXjcogUCX4a4sAAKCRCRxhvAZXjc
 ohdRAP9fclgrRkTl3o4cgaK0PUMt8BZ5QCg/SPQrVT58AQlfSwEAsNtWAeo6U2z1
 FLGuCoPBEW1Zghkj1lMbIhj5zyVaEQ8=
 =Z7Q0
 -----END PGP SIGNATURE-----

Merge tag 'threads-v5.10' of git://git.kernel.org/pub/scm/linux/kernel/git/brauner/linux

Pull pidfd updates from Christian Brauner:
 "This introduces a new extension to the pidfd_open() syscall. Users can
  now raise the new PIDFD_NONBLOCK flag to support non-blocking pidfd
  file descriptors. This has been requested for uses in async process
  management libraries such as async-pidfd in Rust.

  Ever since the introduction of pidfds and more advanced async io
  various programming languages such as Rust have grown support for
  async event libraries. These libraries are created to help build
  epoll-based event loops around file descriptors. A common pattern is
  to automatically set all file descriptors they manage to O_NONBLOCK.

  For such libraries the EAGAIN error code is treated specially. When a
  function is called that returns EAGAIN the function isn't called again
  until the event loop indicates that the file descriptor is ready.
  Supporting EAGAIN when waiting on pidfds makes such libraries just
  work with little effort.

  This introduces a new flag PIDFD_NONBLOCK that is equivalent to
  O_NONBLOCK. This follows the same patterns we have for other (anon
  inode) file descriptors such as EFD_NONBLOCK, IN_NONBLOCK,
  SFD_NONBLOCK, TFD_NONBLOCK and the same for close-on-exec flags.

  Passing a non-blocking pidfd to waitid() currently has no effect, i.e.
  is not supported. There are users which would like to use waitid() on
  pidfds that are O_NONBLOCK and mix it with pidfds that are blocking
  and both pass them to waitid().

  The expected behavior is to have waitid() return -EAGAIN for
  non-blocking pidfds and to block for blocking pidfds without needing
  to perform any additional checks for flags set on the pidfd before
  passing it to waitid(). Non-blocking pidfds will return EAGAIN from
  waitid() when no child process is ready yet. Returning -EAGAIN for
  non-blocking pidfds makes it easier for event loops that handle EAGAIN
  specially.

  It also makes the API more consistent and uniform. In essence,
  waitid() is treated like a read on a non-blocking pidfd or a recvmsg()
  on a non-blocking socket.

  With the addition of support for non-blocking pidfds we support the
  same functionality that sockets do. For sockets, recvmsg() supports
  MSG_DONTWAIT; for pidfds, waitid() supports WNOHANG. Both flags are
  per-call options. In contrast non-blocking pidfds and non-blocking
  sockets are a setting on an open file description affecting all
  threads in the calling process as well as other processes that hold
  file descriptors referring to the same open file description. Both
  behaviors, per call and per open file description, have genuine
  use-cases.

  The interaction with the WNOHANG flag is documented as follows:

   - If a non-blocking pidfd is passed and WNOHANG is not raised, we
     simply raise the WNOHANG flag internally. When do_wait() returns
     indicating that there are eligible child processes but none have
     exited yet, we set EAGAIN. If no child process exists, we continue
     returning ECHILD.

   - If a non-blocking pidfd is passed and WNOHANG is raised, waitid()
     will continue returning 0, i.e. it will not set EAGAIN. This ensures
     backwards compatibility with applications passing WNOHANG
     explicitly with pidfds"

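A minimal userspace sketch of the semantics described above, assuming a
v5.10+ kernel (the fallback defines cover older userspace headers, and
pidfd_open() had no glibc wrapper at the time, hence syscall(2)):

  #define _GNU_SOURCE
  #include <errno.h>
  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/syscall.h>
  #include <sys/wait.h>
  #include <unistd.h>

  #ifndef PIDFD_NONBLOCK
  #define PIDFD_NONBLOCK O_NONBLOCK /* defined to be equivalent */
  #endif
  #ifndef P_PIDFD
  #define P_PIDFD 3                 /* waitid() on a pidfd, v5.4+ */
  #endif

  int main(void)
  {
          pid_t pid = fork();
          if (pid == 0) {           /* child: exit after a short delay */
                  sleep(1);
                  _exit(42);
          }

          int pidfd = syscall(SYS_pidfd_open, pid, PIDFD_NONBLOCK);
          siginfo_t info = { 0 };

          /* Child not done yet: instead of blocking, waitid() fails
           * with EAGAIN, which event loops already handle for
           * O_NONBLOCK file descriptors. */
          if (waitid(P_PIDFD, pidfd, &info, WEXITED) < 0 && errno == EAGAIN)
                  printf("not ready, poll the pidfd and retry\n");

          sleep(2);                 /* e.g. wait for POLLIN on the pidfd */
          if (waitid(P_PIDFD, pidfd, &info, WEXITED) == 0)
                  printf("child exited with status %d\n", info.si_status);

          close(pidfd);
          return 0;
  }
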
* tag 'threads-v5.10' of git://git.kernel.org/pub/scm/linux/kernel/git/brauner/linux:
  tests: remove O_NONBLOCK before waiting for WSTOPPED
  tests: add waitid() tests for non-blocking pidfds
  tests: port pidfd_wait to kselftest harness
  pidfd: support PIDFD_NONBLOCK in pidfd_open()
  exit: support non-blocking pidfds
2020-10-14 14:39:20 -07:00
612e7a4c16 kernel-clone-v5.9
-----BEGIN PGP SIGNATURE-----
 
 iHUEABYKAB0WIQRAhzRXHqcMeLMyaSiRxhvAZXjcogUCXz5bNAAKCRCRxhvAZXjc
 opfjAP9R/J72yxdd2CLGNZ96hyiRX1NgFDOVUhscOvujYJf8ZwD+OoLmKMvAyFW6
 hnMhT1n9Q+aq194hyzChOLQaBTejBQ8=
 =4WCX
 -----END PGP SIGNATURE-----

Merge tag 'kernel-clone-v5.9' of git://git.kernel.org/pub/scm/linux/kernel/git/brauner/linux

Pull kernel_clone() updates from Christian Brauner:
 "During the v5.9 merge window we reworked the process creation
  codepaths across multiple architectures. After this work we were only
  left with the _do_fork() helper based on the struct kernel_clone_args
  calling convention. As was pointed out, _do_fork() isn't valid
  kernelese, especially for a helper that isn't just static.

  This series removes the _do_fork() helper and introduces the new
  kernel_clone() helper. The process creation cleanup didn't change the
  name to something more reasonable mainly because _do_fork() was used
  in quite a few places. So sending this as a separate series seemed the
  better strategy.

  I originally intended to send this early in the v5.9 development cycle
  after the merge window had closed but given that this was touching
  quite a few places I decided to defer this until the v5.10 merge
  window"

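The calling convention itself is unchanged by the rename; a sketch of
what an in-kernel caller looks like after this series (illustrative,
not a real call site):

  #include <linux/sched/task.h>

  static pid_t spawn_helper(void)
  {
          struct kernel_clone_args args = {
                  .flags       = CLONE_FS | CLONE_FILES,
                  .exit_signal = SIGCHLD,
          };

          return kernel_clone(&args);   /* was: _do_fork(&args) */
  }
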
* tag 'kernel-clone-v5.9' of git://git.kernel.org/pub/scm/linux/kernel/git/brauner/linux:
  sched: remove _do_fork()
  tracing: switch to kernel_clone()
  kgdbts: switch to kernel_clone()
  kprobes: switch to kernel_clone()
  x86: switch to kernel_clone()
  sparc: switch to kernel_clone()
  nios2: switch to kernel_clone()
  m68k: switch to kernel_clone()
  ia64: switch to kernel_clone()
  h8300: switch to kernel_clone()
  fork: introduce kernel_clone()
2020-10-14 14:32:52 -07:00
9e51183e94 linux-kselftest-fixes-5.10-rc1
This kselftest fixes update consists of a selftests harness fix to
 flush stdout before forking to avoid parent and child printing
 duplicate messages. This is evident when test output is redirected
 to a file.
 
 The second fix is a tools/ wide change to avoid comma separated statements
 from Joe Perches. This fix spans tools/lib, tools/power/cpupower, and
 selftests.
 -----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCgAdFiEEPZKym/RZuOCGeA/kCwJExA0NQxwFAl+GAdIACgkQCwJExA0N
 QxyraBAAwsPISUpSA34WLuUwNCddI3ttNW2R63ZrdKSy7QlreM02zG9qyEDPwFil
 GLlgXfUE8QOI7rfiqSwr1elzS07bDdel6UcxTuhuy5KPs2+yieGGZ5lllsVY6gJu
 kF5m7setmdmHQr76HCyyGddwdpCTpz7sP3BJzmYn2iAWAQMwtZBXOEgmnf2yiskX
 SHF/f3Bvrnm+BtbzZEa+ysHpL72AlpKrGuLQAnNOCp/DomKEtRACTNxIzKFeO++r
 uelbHO/MzdaGmrCxy3J/RWz3llQVnj6aafZFaqAV7ReWi/OOYTsV48pAHRm8TTv8
 1LvVP48b7aCUc7QWu+d8SBSDfJQANI4tgcP0TI/hUboIuhU8bVkZAUF69txdFgzb
 DopwQVybQq5yEqmPg1RzvccbDojxXq72BvZyqPBo8WmKHWOQXCo9A1owudmcqtob
 WqTr1eAVAd6Rc2vcjkhnzYxQcb8A093ZfP1fyAZ5HQSH5No//4FP9pWBwMzpncKQ
 GdHmMNBns6v1muWMBj6bgT4GA1sN765Kzt1StYSC257v6gtA8+xHo/PUfojZJxy9
 bieAzuqE8n68IKKz4/Rk2JvfFBnaxDZyQUITOCrcoWJRk5apJc3T5+goq+Bep5Na
 SOFbb0JvrGLBjX3bChmLIYVa7zQkupBgwWU8NPM1tYxce+pBS30=
 =jgRu
 -----END PGP SIGNATURE-----

Merge tag 'linux-kselftest-fixes-5.10-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest

Pull kselftest updates from Shuah Khan:

 - a selftests harness fix to flush stdout before forking to avoid
   parent and child printing duplicate messages. This is evident when
   test output is redirected to a file (see the sketch after this list).

 - a tools/ wide change to avoid comma separated statements from Joe
   Perches. This fix spans tools/lib, tools/power/cpupower, and
   selftests.

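A sketch of the duplication the harness fix avoids; with stdout
redirected to a file, stdio is fully buffered and fork() duplicates the
unflushed buffer:

  #include <stdio.h>
  #include <unistd.h>

  int main(void)
  {
          printf("running test\n");   /* buffered, not yet written */

          fflush(stdout);             /* the fix: drain before forking */

          if (fork() == 0)            /* without the flush, parent and
                                       * child both write the same line
                                       * when their stdio buffers are
                                       * flushed at exit */
                  return 0;
          return 0;
  }
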
* tag 'linux-kselftest-fixes-5.10-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest:
  tools: Avoid comma separated statements
  selftests/harness: Flush stdout before forking
2020-10-14 14:23:51 -07:00
cf1d2b44f6 ACPI updates for 5.10-rc1
- Add support for generic initiator-only proximity domains to
    the ACPI NUMA code and the architectures using it (Jonathan
    Cameron).
 
  - Clean up some non-ACPICA code referring to debug facilities from
    ACPICA that are not actually used in there (Hanjun Guo).
 
  - Add new DPTF driver for the PCH FIVR participant (Srinivas
    Pandruvada).
 
  - Reduce overhead related to accessing GPE registers in ACPICA and
    the OS interface layer and make it possible to access GPE registers
    using logical addresses if they are memory-mapped (Rafael Wysocki).
 
  - Update the ACPICA code in the kernel to upstream revision 20200925
    including changes as follows:
    * Add predefined names from the SMBus specification (Bob Moore).
    * Update acpi_help UUID list (Bob Moore).
    * Return exceptions for string-to-integer conversions in iASL (Bob
      Moore).
    * Add a new "ALL <NameSeg>" debugger command (Bob Moore).
    * Add support for 64 bit risc-v compilation (Colin Ian King).
    * Do assorted cleanups (Bob Moore, Colin Ian King, Randy Dunlap).
 
  - Add new ACPI backlight whitelist entry for HP 635 Notebook (Alex
    Hung).
 
  - Move TPS68470 OpRegion driver to drivers/acpi/pmic/ and split out
    Kconfig and Makefile specific for ACPI PMIC (Andy Shevchenko).
 
  - Clean up the ACPI SoC driver for AMD SoCs (Hanjun Guo).
 
  - Add missing config_item_put() to fix refcount leak (Hanjun Guo).
 
  - Drop leftover field from struct acpi_memory_device (Hanjun Guo).
 
  - Make the ACPI extlog driver check for RDMSR failures (Ben
    Hutchings).
 
  - Fix handling of lid state changes in the ACPI button driver when
    input device is closed (Dmitry Torokhov).
 
  - Fix several assorted build issues (Barnabás Pőcze, John Garry,
    Nathan Chancellor, Tian Tao).
 
  - Drop unused inline functions and reduce code duplication by using
    kobj_to_dev() in the NFIT parsing code (YueHaibing, Wang Qing).
 
  - Serialize tools/power/acpi Makefile (Thomas Renninger).
 -----BEGIN PGP SIGNATURE-----
 
 iQJGBAABCAAwFiEE4fcc61cGeeHD/fCwgsRv/nhiVHEFAl+F4IkSHHJqd0Byand5
 c29ja2kubmV0AAoJEILEb/54YlRx1gIQAIZrt09fquEIZhYulGZAkuYhSX2U/DZt
 poow5+TiGk36JNHlbZS19kZ3F0tJ1wA6CKSfF/bYyULxL+gYaUjdLXzv2kArTSAj
 nzDXQ2CystpySZI/sEkl4QjsMg0xuZlBhlnCfNHzJw049TgdsJHnxMkJXb8T90A+
 l2JKm2OpBkNvQGNpwd3djLg8xSDnHUmuevsWZPHDp92/fLMF9DUBk8dVuEwa0ndF
 hAUpWm+EL1tJQnhNwtfV/Akd9Ypqgk/7ROFWFHGDtHMZGnBjpyXZw68vHMX7SL6N
 Ej90GWGPHSJs/7Fsg4Hiaxxcph9WFNLPcpck5lVAMIrNHMKANjqQzCsmHavV/WTG
 STC9/qwJauA1EOjovlmlCFHctjKE/ya6Hm299WTlfBqB+Lu1L3oMR2CC+Uj0YfyG
 sv3264rJCsaSw610iwQOG807qHENopASO2q5DuKG0E9JpcaBUwn1N4qP5svvQciq
 4aA8Ma6xM/QHCO4CS0Se9C0+WSVtxWwOUichRqQmU4E6u1sXvKJxTeWo79rV7PAh
 L6BwoOxBLabEiyzpi6HPGs6DoKj/N6tOQenBh4ibdwpAwMtq7hIlBFa0bp19c2wT
 vx8F2Raa8vbQ2zZ1QEiPZnPLJUoy2DgaCtKJ6E0FTDXNs3VFlWgyhIUlIRqk5BS9
 OnAwVAUrTMkJ
 =feLU
 -----END PGP SIGNATURE-----

Merge tag 'acpi-5.10-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm

Pull ACPI updates from Rafael Wysocki:
 "These add support for generic initiator-only proximity domains to the
  ACPI NUMA code and the architectures using it, clean up some
  non-ACPICA code referring to debug facilities from ACPICA, reduce the
  overhead related to accessing GPE registers, add a new DPTF (Dynamic
  Power and Thermal Framework) participant driver, update the ACPICA
  code in the kernel to upstream revision 20200925, add a new ACPI
  backlight whitelist entry, fix a few assorted issues and clean up some
  code.

  Specifics:

   - Add support for generic initiator-only proximity domains to the
     ACPI NUMA code and the architectures using it (Jonathan Cameron)

   - Clean up some non-ACPICA code referring to debug facilities from
     ACPICA that are not actually used in there (Hanjun Guo)

   - Add new DPTF driver for the PCH FIVR participant (Srinivas
     Pandruvada)

   - Reduce overhead related to accessing GPE registers in ACPICA and
     the OS interface layer and make it possible to access GPE registers
     using logical addresses if they are memory-mapped (Rafael Wysocki)

   - Update the ACPICA code in the kernel to upstream revision 20200925
     including changes as follows:
      + Add predefined names from the SMBus specification (Bob Moore)
      + Update acpi_help UUID list (Bob Moore)
      + Return exceptions for string-to-integer conversions in iASL (Bob
        Moore)
      + Add a new "ALL <NameSeg>" debugger command (Bob Moore)
      + Add support for 64 bit risc-v compilation (Colin Ian King)
      + Do assorted cleanups (Bob Moore, Colin Ian King, Randy Dunlap)

   - Add new ACPI backlight whitelist entry for HP 635 Notebook (Alex
     Hung)

   - Move TPS68470 OpRegion driver to drivers/acpi/pmic/ and split out
     Kconfig and Makefile specific for ACPI PMIC (Andy Shevchenko)

   - Clean up the ACPI SoC driver for AMD SoCs (Hanjun Guo)

   - Add missing config_item_put() to fix refcount leak (Hanjun Guo)

   - Drop leftover field from struct acpi_memory_device (Hanjun Guo)

   - Make the ACPI extlog driver check for RDMSR failures (Ben
     Hutchings)

   - Fix handling of lid state changes in the ACPI button driver when
     input device is closed (Dmitry Torokhov)

   - Fix several assorted build issues (Barnabás Pőcze, John Garry,
     Nathan Chancellor, Tian Tao)

   - Drop unused inline functions and reduce code duplication by using
     kobj_to_dev() in the NFIT parsing code (YueHaibing, Wang Qing)

   - Serialize tools/power/acpi Makefile (Thomas Renninger)"

* tag 'acpi-5.10-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (64 commits)
  ACPICA: Update version to 20200925 Version 20200925
  ACPICA: Remove unnecessary semicolon
  ACPICA: Debugger: Add a new command: "ALL <NameSeg>"
  ACPICA: iASL: Return exceptions for string-to-integer conversions
  ACPICA: acpi_help: Update UUID list
  ACPICA: Add predefined names found in the SMBus sepcification
  ACPICA: Tree-wide: fix various typos and spelling mistakes
  ACPICA: Drop the repeated word "an" in a comment
  ACPICA: Add support for 64 bit risc-v compilation
  ACPI: button: fix handling lid state changes when input device closed
  tools/power/acpi: Serialize Makefile
  ACPI: scan: Replace ACPI_DEBUG_PRINT() with pr_debug()
  ACPI: memhotplug: Remove 'state' from struct acpi_memory_device
  ACPI / extlog: Check for RDMSR failure
  ACPI: Make acpi_evaluate_dsm() prototype consistent
  docs: mm: numaperf.rst Add brief description for access class 1.
  node: Add access1 class to represent CPU to memory characteristics
  ACPI: HMAT: Fix handling of changes from ACPI 6.2 to ACPI 6.3
  ACPI: Let ACPI know we support Generic Initiator Affinity Structures
  x86: Support Generic Initiator only proximity domains
  ...
2020-10-14 11:42:04 -07:00
15cb5469fc platform-drivers-x86 for v5.10-1
Rather calm cycle for PDx86; all these have been in for-next for
 a couple of days with no bot complaints.
 
 Highlights:
 - PMC TigerLake fixes and new RocketLake support
 - Various small fixes / updates in other drivers/tools
 
 The following is an automated git shortlog grouped by driver:
 
 MAINTAINERS:
  -  update X86 PLATFORM DRIVERS entry with new kernel.org git repo
  -  Update maintainers for pmc_core driver
 
 hp-wmi:
  -  add support for thermal policy
 
 intel_pmc_core:
  -  fix: Replace dev_dbg macro with dev_info()
  -  Add Intel RocketLake (RKL) support
  -  Clean up: Remove the duplicate comments and reorganize
  -  Fix the slp_s0 counter displayed value
  -  Fix TigerLake power gating status map
 
 mlx-platform:
  -  Add capability field to platform FAN description
  -  Remove PSU EEPROM configuration
 
 platform_data/mlxreg:
  -  Extend core platform structure
  -  Update module license
 
 pmc_core:
  -  Use descriptive names for LPM registers
 
 tools/power/x86/intel-speed-select:
  -  Update version for v5.10
  -  Fix missing base-freq core IDs
 -----BEGIN PGP SIGNATURE-----
 
 iQFIBAABCAAyFiEEuvA7XScYQRpenhd+kuxHeUQDJ9wFAl+FmAsUHGhkZWdvZWRl
 QHJlZGhhdC5jb20ACgkQkuxHeUQDJ9x5awf/VWn0gcSw1i+y+Q/KPw/RCMXrEQQm
 Bqyt++IDSonvjBUmxE7QYtPvHK7lecPLXUzLkcfSoUmEZSPGoqe4F5Hj+814lj8x
 fveScf2DwUQyEfj26y4rmza1K4h7VohjJ7rQm0+t15KamrcogLiwqDpvel4v90lp
 YVvJUxDBOJxCrMs5fAziZAP7FxD42d8j664DFCPONH3EsY/vZMfOnsDRKhjahtFp
 LTtWXY5LyFf5HARKhubv/gmDddR7FzZB8/xc/G1CXpOmUBTcSgHgXH1OE/ypBXIe
 LOdchGqL2WRTq71IUKsvEXYbLSOHOMbIfBr7eCwZRKfmQLjQ8HXqI7xl9A==
 =luk4
 -----END PGP SIGNATURE-----

Merge tag 'platform-drivers-x86-v5.10-1' of git://git.kernel.org/pub/scm/linux/kernel/git/pdx86/platform-drivers-x86

Pull x86 platform driver updates from Hans de Goede:
 "Rather calm cycle for x86 platform drivers, all these have been in
  for-next for a couple of days with no bot complaints.

  Highlights:

   - PMC TigerLake fixes and new RocketLake support

   - various small fixes / updates in other drivers/tools"

* tag 'platform-drivers-x86-v5.10-1' of git://git.kernel.org/pub/scm/linux/kernel/git/pdx86/platform-drivers-x86:
  MAINTAINERS: update X86 PLATFORM DRIVERS entry with new kernel.org git repo
  platform/x86: mlx-platform: Add capability field to platform FAN description
  platform_data/mlxreg: Extend core platform structure
  platform_data/mlxreg: Update module license
  platform/x86: mlx-platform: Remove PSU EEPROM configuration
  MAINTAINERS: Update maintainers for pmc_core driver
  platform/x86: intel_pmc_core: fix: Replace dev_dbg macro with dev_info()
  platform/x86: intel_pmc_core: Add Intel RocketLake (RKL) support
  platform/x86: intel_pmc_core: Clean up: Remove the duplicate comments and reorganize
  platform/x86: intel_pmc_core: Fix the slp_s0 counter displayed value
  platform/x86: intel_pmc_core: Fix TigerLake power gating status map
  platform/x86: pmc_core: Use descriptive names for LPM registers
  tools/power/x86/intel-speed-select: Update version for v5.10
  tools/power/x86/intel-speed-select: Fix missing base-freq core IDs
  platform/x86: hp-wmi: add support for thermal policy
2020-10-14 10:43:24 -07:00
f92993851f perf bench: Use condition variables in numa.
The existing approach to synchronization between threads in the numa
benchmark is unbalanced mutexes.

This synchronization causes thread sanitizer to warn of locks being
taken twice on a thread without an unlock, as well as unlocks with no
corresponding locks.

This change replaces the synchronization with more regular condition
variables.

While this fixes one class of thread sanitizer warnings, there still
remain warnings of data races due to threads reading and writing shared
memory without any atomics.

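The pattern adopted is the usual mutex-plus-condition-variable
handshake; a sketch with illustrative names (not the actual
bench/numa.c symbols):

  #include <pthread.h>

  static pthread_mutex_t startup_mutex = PTHREAD_MUTEX_INITIALIZER;
  static pthread_cond_t  startup_cond  = PTHREAD_COND_INITIALIZER;
  static int nr_started;

  static void worker_ready(void)        /* called by each worker */
  {
          pthread_mutex_lock(&startup_mutex);
          nr_started++;                 /* state change under the lock */
          pthread_cond_signal(&startup_cond);
          pthread_mutex_unlock(&startup_mutex);  /* lock/unlock stay paired */
  }

  static void wait_for_workers(int nr_workers)  /* called by main thread */
  {
          pthread_mutex_lock(&startup_mutex);
          while (nr_started < nr_workers)
                  pthread_cond_wait(&startup_cond, &startup_mutex);
          pthread_mutex_unlock(&startup_mutex);
  }
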
Committer testing:

  Basic run on a non-NUMA machine.

  # perf bench numa

          # List of available benchmarks for collection 'numa':

             mem: Benchmark for NUMA workloads
             all: Run all NUMA benchmarks

  # perf bench numa all
  # Running numa/mem benchmark...

   # Running main, "perf bench numa numa-mem"
   #
   # Running test on: Linux five 5.8.12-200.fc32.x86_64 #1 SMP Mon Sep 28 12:17:31 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
   #

   # Running RAM-bw-local, "perf bench numa mem -p 1 -t 1 -P 1024 -C 0 -M 0 -s 20 -zZq --thp  1 --no-data_rand_walk"
           20.076 secs slowest (max) thread-runtime
           20.000 secs fastest (min) thread-runtime
           20.073 secs average thread-runtime
            0.190 % difference between max/avg runtime
          241.828 GB data processed, per thread
          241.828 GB data processed, total
            0.083 nsecs/byte/thread runtime
           12.045 GB/sec/thread speed
           12.045 GB/sec total speed

   # Running RAM-bw-local-NOTHP, "perf bench numa mem -p 1 -t 1 -P 1024 -C 0 -M 0 -s 20 -zZq --thp  1 --no-data_rand_walk --thp -1"
           20.045 secs slowest (max) thread-runtime
           20.000 secs fastest (min) thread-runtime
           20.014 secs average thread-runtime
            0.111 % difference between max/avg runtime
          234.304 GB data processed, per thread
          234.304 GB data processed, total
            0.086 nsecs/byte/thread runtime
           11.689 GB/sec/thread speed
           11.689 GB/sec total speed

   # Running RAM-bw-remote, "perf bench numa mem -p 1 -t 1 -P 1024 -C 0 -M 1 -s 20 -zZq --thp  1 --no-data_rand_walk"

  Test not applicable, system has only 1 nodes.

   # Running RAM-bw-local-2x, "perf bench numa mem -p 2 -t 1 -P 1024 -C 0,2 -M 0x2 -s 20 -zZq --thp  1 --no-data_rand_walk"
           20.138 secs slowest (max) thread-runtime
           20.000 secs fastest (min) thread-runtime
           20.121 secs average thread-runtime
            0.342 % difference between max/avg runtime
          135.961 GB data processed, per thread
          271.922 GB data processed, total
            0.148 nsecs/byte/thread runtime
            6.752 GB/sec/thread speed
           13.503 GB/sec total speed

   # Running RAM-bw-remote-2x, "perf bench numa mem -p 2 -t 1 -P 1024 -C 0,2 -M 1x2 -s 20 -zZq --thp  1 --no-data_rand_walk"

  Test not applicable, system has only 1 nodes.

   # Running RAM-bw-cross, "perf bench numa mem -p 2 -t 1 -P 1024 -C 0,8 -M 1,0 -s 20 -zZq --thp  1 --no-data_rand_walk"

  Test not applicable, system has only 1 nodes.

   # Running  1x3-convergence, "perf bench numa mem -p 1 -t 3 -P 512 -s 100 -zZ0qcm --thp  1"
            0.747 secs latency to NUMA-converge
            0.747 secs slowest (max) thread-runtime
            0.000 secs fastest (min) thread-runtime
            0.714 secs average thread-runtime
           50.000 % difference between max/avg runtime
            3.228 GB data processed, per thread
            9.683 GB data processed, total
            0.231 nsecs/byte/thread runtime
            4.321 GB/sec/thread speed
           12.964 GB/sec total speed

   # Running  1x4-convergence, "perf bench numa mem -p 1 -t 4 -P 512 -s 100 -zZ0qcm --thp  1"
            1.127 secs latency to NUMA-converge
            1.127 secs slowest (max) thread-runtime
            1.000 secs fastest (min) thread-runtime
            1.089 secs average thread-runtime
            5.624 % difference between max/avg runtime
            3.765 GB data processed, per thread
           15.062 GB data processed, total
            0.299 nsecs/byte/thread runtime
            3.342 GB/sec/thread speed
           13.368 GB/sec total speed

   # Running  1x6-convergence, "perf bench numa mem -p 1 -t 6 -P 1020 -s 100 -zZ0qcm --thp  1"
            1.003 secs latency to NUMA-converge
            1.003 secs slowest (max) thread-runtime
            0.000 secs fastest (min) thread-runtime
            0.889 secs average thread-runtime
           50.000 % difference between max/avg runtime
            2.141 GB data processed, per thread
           12.847 GB data processed, total
            0.469 nsecs/byte/thread runtime
            2.134 GB/sec/thread speed
           12.805 GB/sec total speed

   # Running  2x3-convergence, "perf bench numa mem -p 2 -t 3 -P 1020 -s 100 -zZ0qcm --thp  1"
            1.814 secs latency to NUMA-converge
            1.814 secs slowest (max) thread-runtime
            1.000 secs fastest (min) thread-runtime
            1.716 secs average thread-runtime
           22.440 % difference between max/avg runtime
            3.747 GB data processed, per thread
           22.483 GB data processed, total
            0.484 nsecs/byte/thread runtime
            2.065 GB/sec/thread speed
           12.393 GB/sec total speed

   # Running  3x3-convergence, "perf bench numa mem -p 3 -t 3 -P 1020 -s 100 -zZ0qcm --thp  1"
            2.065 secs latency to NUMA-converge
            2.065 secs slowest (max) thread-runtime
            1.000 secs fastest (min) thread-runtime
            1.947 secs average thread-runtime
           25.788 % difference between max/avg runtime
            2.855 GB data processed, per thread
           25.694 GB data processed, total
            0.723 nsecs/byte/thread runtime
            1.382 GB/sec/thread speed
           12.442 GB/sec total speed

   # Running  4x4-convergence, "perf bench numa mem -p 4 -t 4 -P 512 -s 100 -zZ0qcm --thp  1"
            1.912 secs latency to NUMA-converge
            1.912 secs slowest (max) thread-runtime
            1.000 secs fastest (min) thread-runtime
            1.775 secs average thread-runtime
           23.852 % difference between max/avg runtime
            1.479 GB data processed, per thread
           23.668 GB data processed, total
            1.293 nsecs/byte/thread runtime
            0.774 GB/sec/thread speed
           12.378 GB/sec total speed

   # Running  4x4-convergence-NOTHP, "perf bench numa mem -p 4 -t 4 -P 512 -s 100 -zZ0qcm --thp  1 --thp -1"
            1.783 secs latency to NUMA-converge
            1.783 secs slowest (max) thread-runtime
            1.000 secs fastest (min) thread-runtime
            1.633 secs average thread-runtime
           21.960 % difference between max/avg runtime
            1.345 GB data processed, per thread
           21.517 GB data processed, total
            1.326 nsecs/byte/thread runtime
            0.754 GB/sec/thread speed
           12.067 GB/sec total speed

   # Running  4x6-convergence, "perf bench numa mem -p 4 -t 6 -P 1020 -s 100 -zZ0qcm --thp  1"
            5.396 secs latency to NUMA-converge
            5.396 secs slowest (max) thread-runtime
            4.000 secs fastest (min) thread-runtime
            4.928 secs average thread-runtime
           12.937 % difference between max/avg runtime
            2.721 GB data processed, per thread
           65.306 GB data processed, total
            1.983 nsecs/byte/thread runtime
            0.504 GB/sec/thread speed
           12.102 GB/sec total speed

   # Running  4x8-convergence, "perf bench numa mem -p 4 -t 8 -P 512 -s 100 -zZ0qcm --thp  1"
            3.121 secs latency to NUMA-converge
            3.121 secs slowest (max) thread-runtime
            2.000 secs fastest (min) thread-runtime
            2.836 secs average thread-runtime
           17.962 % difference between max/avg runtime
            1.194 GB data processed, per thread
           38.192 GB data processed, total
            2.615 nsecs/byte/thread runtime
            0.382 GB/sec/thread speed
           12.236 GB/sec total speed

   # Running  8x4-convergence, "perf bench numa mem -p 8 -t 4 -P 512 -s 100 -zZ0qcm --thp  1"
            4.302 secs latency to NUMA-converge
            4.302 secs slowest (max) thread-runtime
            3.000 secs fastest (min) thread-runtime
            4.045 secs average thread-runtime
           15.133 % difference between max/avg runtime
            1.631 GB data processed, per thread
           52.178 GB data processed, total
            2.638 nsecs/byte/thread runtime
            0.379 GB/sec/thread speed
           12.128 GB/sec total speed

   # Running  8x4-convergence-NOTHP, "perf bench numa mem -p 8 -t 4 -P 512 -s 100 -zZ0qcm --thp  1 --thp -1"
            4.418 secs latency to NUMA-converge
            4.418 secs slowest (max) thread-runtime
            3.000 secs fastest (min) thread-runtime
            4.104 secs average thread-runtime
           16.045 % difference between max/avg runtime
            1.664 GB data processed, per thread
           53.254 GB data processed, total
            2.655 nsecs/byte/thread runtime
            0.377 GB/sec/thread speed
           12.055 GB/sec total speed

   # Running  3x1-convergence, "perf bench numa mem -p 3 -t 1 -P 512 -s 100 -zZ0qcm --thp  1"
            0.973 secs latency to NUMA-converge
            0.973 secs slowest (max) thread-runtime
            0.000 secs fastest (min) thread-runtime
            0.955 secs average thread-runtime
           50.000 % difference between max/avg runtime
            4.124 GB data processed, per thread
           12.372 GB data processed, total
            0.236 nsecs/byte/thread runtime
            4.238 GB/sec/thread speed
           12.715 GB/sec total speed

   # Running  4x1-convergence, "perf bench numa mem -p 4 -t 1 -P 512 -s 100 -zZ0qcm --thp  1"
            0.820 secs latency to NUMA-converge
            0.820 secs slowest (max) thread-runtime
            0.000 secs fastest (min) thread-runtime
            0.808 secs average thread-runtime
           50.000 % difference between max/avg runtime
            2.555 GB data processed, per thread
           10.220 GB data processed, total
            0.321 nsecs/byte/thread runtime
            3.117 GB/sec/thread speed
           12.468 GB/sec total speed

   # Running  8x1-convergence, "perf bench numa mem -p 8 -t 1 -P 512 -s 100 -zZ0qcm --thp  1"
            0.667 secs latency to NUMA-converge
            0.667 secs slowest (max) thread-runtime
            0.000 secs fastest (min) thread-runtime
            0.607 secs average thread-runtime
           50.000 % difference between max/avg runtime
            1.009 GB data processed, per thread
            8.069 GB data processed, total
            0.661 nsecs/byte/thread runtime
            1.512 GB/sec/thread speed
           12.095 GB/sec total speed

   # Running 16x1-convergence, "perf bench numa mem -p 16 -t 1 -P 256 -s 100 -zZ0qcm --thp  1"
            1.546 secs latency to NUMA-converge
            1.546 secs slowest (max) thread-runtime
            1.000 secs fastest (min) thread-runtime
            1.485 secs average thread-runtime
           17.664 % difference between max/avg runtime
            1.162 GB data processed, per thread
           18.594 GB data processed, total
            1.331 nsecs/byte/thread runtime
            0.752 GB/sec/thread speed
           12.025 GB/sec total speed

   # Running 32x1-convergence, "perf bench numa mem -p 32 -t 1 -P 128 -s 100 -zZ0qcm --thp  1"
            0.812 secs latency to NUMA-converge
            0.812 secs slowest (max) thread-runtime
            0.000 secs fastest (min) thread-runtime
            0.739 secs average thread-runtime
           50.000 % difference between max/avg runtime
            0.309 GB data processed, per thread
            9.874 GB data processed, total
            2.630 nsecs/byte/thread runtime
            0.380 GB/sec/thread speed
           12.166 GB/sec total speed

   # Running  2x1-bw-process, "perf bench numa mem -p 2 -t 1 -P 1024 -s 20 -zZ0q --thp  1"
           20.044 secs slowest (max) thread-runtime
           20.000 secs fastest (min) thread-runtime
           20.020 secs average thread-runtime
            0.109 % difference between max/avg runtime
          125.750 GB data processed, per thread
          251.501 GB data processed, total
            0.159 nsecs/byte/thread runtime
            6.274 GB/sec/thread speed
           12.548 GB/sec total speed

   # Running  3x1-bw-process, "perf bench numa mem -p 3 -t 1 -P 1024 -s 20 -zZ0q --thp  1"
           20.148 secs slowest (max) thread-runtime
           20.000 secs fastest (min) thread-runtime
           20.090 secs average thread-runtime
            0.367 % difference between max/avg runtime
           85.267 GB data processed, per thread
          255.800 GB data processed, total
            0.236 nsecs/byte/thread runtime
            4.232 GB/sec/thread speed
           12.696 GB/sec total speed

   # Running  4x1-bw-process, "perf bench numa mem -p 4 -t 1 -P 1024 -s 20 -zZ0q --thp  1"
           20.169 secs slowest (max) thread-runtime
           20.000 secs fastest (min) thread-runtime
           20.100 secs average thread-runtime
            0.419 % difference between max/avg runtime
           63.144 GB data processed, per thread
          252.576 GB data processed, total
            0.319 nsecs/byte/thread runtime
            3.131 GB/sec/thread speed
           12.523 GB/sec total speed

   # Running  8x1-bw-process, "perf bench numa mem -p 8 -t 1 -P  512 -s 20 -zZ0q --thp  1"
           20.175 secs slowest (max) thread-runtime
           20.000 secs fastest (min) thread-runtime
           20.107 secs average thread-runtime
            0.433 % difference between max/avg runtime
           31.267 GB data processed, per thread
          250.133 GB data processed, total
            0.645 nsecs/byte/thread runtime
            1.550 GB/sec/thread speed
           12.398 GB/sec total speed

   # Running  8x1-bw-process-NOTHP, "perf bench numa mem -p 8 -t 1 -P  512 -s 20 -zZ0q --thp  1 --thp -1"
           20.216 secs slowest (max) thread-runtime
           20.000 secs fastest (min) thread-runtime
           20.113 secs average thread-runtime
            0.535 % difference between max/avg runtime
           30.998 GB data processed, per thread
          247.981 GB data processed, total
            0.652 nsecs/byte/thread runtime
            1.533 GB/sec/thread speed
           12.266 GB/sec total speed

   # Running 16x1-bw-process, "perf bench numa mem -p 16 -t 1 -P 256 -s 20 -zZ0q --thp  1"
           20.234 secs slowest (max) thread-runtime
           20.000 secs fastest (min) thread-runtime
           20.174 secs average thread-runtime
            0.577 % difference between max/avg runtime
           15.377 GB data processed, per thread
          246.039 GB data processed, total
            1.316 nsecs/byte/thread runtime
            0.760 GB/sec/thread speed
           12.160 GB/sec total speed

   # Running  1x4-bw-thread, "perf bench numa mem -p 1 -t 4 -T 256 -s 20 -zZ0q --thp  1"
           20.040 secs slowest (max) thread-runtime
           20.000 secs fastest (min) thread-runtime
           20.028 secs average thread-runtime
            0.099 % difference between max/avg runtime
           66.832 GB data processed, per thread
          267.328 GB data processed, total
            0.300 nsecs/byte/thread runtime
            3.335 GB/sec/thread speed
           13.340 GB/sec total speed

   # Running  1x8-bw-thread, "perf bench numa mem -p 1 -t 8 -T 256 -s 20 -zZ0q --thp  1"
           20.064 secs slowest (max) thread-runtime
           20.000 secs fastest (min) thread-runtime
           20.034 secs average thread-runtime
            0.160 % difference between max/avg runtime
           32.911 GB data processed, per thread
          263.286 GB data processed, total
            0.610 nsecs/byte/thread runtime
            1.640 GB/sec/thread speed
           13.122 GB/sec total speed

   # Running 1x16-bw-thread, "perf bench numa mem -p 1 -t 16 -T 128 -s 20 -zZ0q --thp  1"
           20.092 secs slowest (max) thread-runtime
           20.000 secs fastest (min) thread-runtime
           20.052 secs average thread-runtime
            0.230 % difference between max/avg runtime
           16.131 GB data processed, per thread
          258.088 GB data processed, total
            1.246 nsecs/byte/thread runtime
            0.803 GB/sec/thread speed
           12.845 GB/sec total speed

   # Running 1x32-bw-thread, "perf bench numa mem -p 1 -t 32 -T 64 -s 20 -zZ0q --thp  1"
           20.099 secs slowest (max) thread-runtime
           20.000 secs fastest (min) thread-runtime
           20.063 secs average thread-runtime
            0.247 % difference between max/avg runtime
            7.962 GB data processed, per thread
          254.773 GB data processed, total
            2.525 nsecs/byte/thread runtime
            0.396 GB/sec/thread speed
           12.676 GB/sec total speed

   # Running  2x3-bw-process, "perf bench numa mem -p 2 -t 3 -P 512 -s 20 -zZ0q --thp  1"
           20.150 secs slowest (max) thread-runtime
           20.000 secs fastest (min) thread-runtime
           20.120 secs average thread-runtime
            0.372 % difference between max/avg runtime
           44.827 GB data processed, per thread
          268.960 GB data processed, total
            0.450 nsecs/byte/thread runtime
            2.225 GB/sec/thread speed
           13.348 GB/sec total speed

   # Running  4x4-bw-process, "perf bench numa mem -p 4 -t 4 -P 512 -s 20 -zZ0q --thp  1"
           20.258 secs slowest (max) thread-runtime
           20.000 secs fastest (min) thread-runtime
           20.168 secs average thread-runtime
            0.636 % difference between max/avg runtime
           17.079 GB data processed, per thread
          273.263 GB data processed, total
            1.186 nsecs/byte/thread runtime
            0.843 GB/sec/thread speed
           13.489 GB/sec total speed

   # Running  4x6-bw-process, "perf bench numa mem -p 4 -t 6 -P 512 -s 20 -zZ0q --thp  1"
           20.559 secs slowest (max) thread-runtime
           20.000 secs fastest (min) thread-runtime
           20.382 secs average thread-runtime
            1.359 % difference between max/avg runtime
           10.758 GB data processed, per thread
          258.201 GB data processed, total
            1.911 nsecs/byte/thread runtime
            0.523 GB/sec/thread speed
           12.559 GB/sec total speed

   # Running  4x8-bw-process, "perf bench numa mem -p 4 -t 8 -P 512 -s 20 -zZ0q --thp  1"
           20.744 secs slowest (max) thread-runtime
           20.000 secs fastest (min) thread-runtime
           20.516 secs average thread-runtime
            1.792 % difference between max/avg runtime
            8.069 GB data processed, per thread
          258.201 GB data processed, total
            2.571 nsecs/byte/thread runtime
            0.389 GB/sec/thread speed
           12.447 GB/sec total speed

   # Running  4x8-bw-process-NOTHP, "perf bench numa mem -p 4 -t 8 -P 512 -s 20 -zZ0q --thp  1 --thp -1"
           20.855 secs slowest (max) thread-runtime
           20.000 secs fastest (min) thread-runtime
           20.561 secs average thread-runtime
            2.050 % difference between max/avg runtime
            8.069 GB data processed, per thread
          258.201 GB data processed, total
            2.585 nsecs/byte/thread runtime
            0.387 GB/sec/thread speed
           12.381 GB/sec total speed

   # Running  3x3-bw-process, "perf bench numa mem -p 3 -t 3 -P 512 -s 20 -zZ0q --thp  1"
           20.134 secs slowest (max) thread-runtime
           20.000 secs fastest (min) thread-runtime
           20.077 secs average thread-runtime
            0.333 % difference between max/avg runtime
           28.091 GB data processed, per thread
          252.822 GB data processed, total
            0.717 nsecs/byte/thread runtime
            1.395 GB/sec/thread speed
           12.557 GB/sec total speed

   # Running  5x5-bw-process, "perf bench numa mem -p 5 -t 5 -P 512 -s 20 -zZ0q --thp  1"
           20.588 secs slowest (max) thread-runtime
           20.000 secs fastest (min) thread-runtime
           20.375 secs average thread-runtime
            1.427 % difference between max/avg runtime
           10.177 GB data processed, per thread
          254.436 GB data processed, total
            2.023 nsecs/byte/thread runtime
            0.494 GB/sec/thread speed
           12.359 GB/sec total speed

   # Running 2x16-bw-process, "perf bench numa mem -p 2 -t 16 -P 512 -s 20 -zZ0q --thp  1"
           20.657 secs slowest (max) thread-runtime
           20.000 secs fastest (min) thread-runtime
           20.429 secs average thread-runtime
            1.589 % difference between max/avg runtime
            8.170 GB data processed, per thread
          261.429 GB data processed, total
            2.528 nsecs/byte/thread runtime
            0.395 GB/sec/thread speed
           12.656 GB/sec total speed

   # Running 1x32-bw-process, "perf bench numa mem -p 1 -t 32 -P 2048 -s 20 -zZ0q --thp  1"
           22.981 secs slowest (max) thread-runtime
           20.000 secs fastest (min) thread-runtime
           21.996 secs average thread-runtime
            6.486 % difference between max/avg runtime
            8.863 GB data processed, per thread
          283.606 GB data processed, total
            2.593 nsecs/byte/thread runtime
            0.386 GB/sec/thread speed
           12.341 GB/sec total speed

   # Running numa02-bw, "perf bench numa mem -p 1 -t 32 -T 32 -s 20 -zZ0q --thp  1"
           20.047 secs slowest (max) thread-runtime
           19.000 secs fastest (min) thread-runtime
           20.026 secs average thread-runtime
            2.611 % difference between max/avg runtime
            8.441 GB data processed, per thread
          270.111 GB data processed, total
            2.375 nsecs/byte/thread runtime
            0.421 GB/sec/thread speed
           13.474 GB/sec total speed

   # Running numa02-bw-NOTHP, "perf bench numa mem -p 1 -t 32 -T 32 -s 20 -zZ0q --thp  1 --thp -1"
           20.088 secs slowest (max) thread-runtime
           19.000 secs fastest (min) thread-runtime
           20.025 secs average thread-runtime
            2.709 % difference between max/avg runtime
            8.411 GB data processed, per thread
          269.142 GB data processed, total
            2.388 nsecs/byte/thread runtime
            0.419 GB/sec/thread speed
           13.398 GB/sec total speed

   # Running numa01-bw-thread, "perf bench numa mem -p 2 -t 16 -T 192 -s 20 -zZ0q --thp  1"
           20.293 secs slowest (max) thread-runtime
           20.000 secs fastest (min) thread-runtime
           20.175 secs average thread-runtime
            0.721 % difference between max/avg runtime
            7.918 GB data processed, per thread
          253.374 GB data processed, total
            2.563 nsecs/byte/thread runtime
            0.390 GB/sec/thread speed
           12.486 GB/sec total speed

   # Running numa01-bw-thread-NOTHP, "perf bench numa mem -p 2 -t 16 -T 192 -s 20 -zZ0q --thp  1 --thp -1"
           20.411 secs slowest (max) thread-runtime
           20.000 secs fastest (min) thread-runtime
           20.226 secs average thread-runtime
            1.006 % difference between max/avg runtime
            7.931 GB data processed, per thread
          253.778 GB data processed, total
            2.574 nsecs/byte/thread runtime
            0.389 GB/sec/thread speed
           12.434 GB/sec total speed

  #

Signed-off-by: Ian Rogers <irogers@google.com>
Acked-by: Jiri Olsa <jolsa@redhat.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Link: https://lore.kernel.org/r/20201012161611.366482-1-irogers@google.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-10-14 14:24:53 -03:00
da9803dfd3 This feature enhances the current guest memory encryption support
called SEV by also encrypting the guest register state, making the
 registers inaccessible to the hypervisor by en-/decrypting them on world
 switches. Thus, it adds additional protection to Linux guests against
 exfiltration, control flow and rollback attacks.
 
 With SEV-ES, the guest is in full control of what registers the
 hypervisor can access. This is provided by a guest-host exchange
 mechanism based on a new exception vector called VMM Communication
 Exception (#VC), a new instruction called VMGEXIT and a shared
 Guest-Host Communication Block which is a decrypted page shared between
 the guest and the hypervisor.
 
 Intercepts to the hypervisor become #VC exceptions in an SEV-ES guest so
 in order for that exception mechanism to work, the early x86 init code
 needed to be made able to handle exceptions, which, in itself, brings
 a bunch of very nice cleanups and improvements to the early boot code
 like an early page fault handler, allowing for on-demand building of the
 identity mapping. With that, !KASLR configurations do not use the EFI
 page table anymore but switch to a kernel-controlled one.
 
 The main part of this series adds the support for that new exchange
 mechanism. The goal has been to keep this as separate as possible
 from the core x86 code by concentrating the machinery in two
 SEV-ES-specific files:
 
  arch/x86/kernel/sev-es-shared.c
  arch/x86/kernel/sev-es.c
 
 Other interaction with core x86 code has been kept to a minimum and behind
 static keys to minimize the performance impact on !SEV-ES setups.
 
 Work by Joerg Roedel and Thomas Lendacky and others.

Merge tag 'x86_seves_for_v5.10' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull x86 SEV-ES support from Borislav Petkov:
 "SEV-ES enhances the current guest memory encryption support called SEV
  by also encrypting the guest register state, making the registers
  inaccessible to the hypervisor by en-/decrypting them on world
  switches. Thus, it adds additional protection to Linux guests against
  exfiltration, control flow and rollback attacks.

  With SEV-ES, the guest is in full control of what registers the
  hypervisor can access. This is provided by a guest-host exchange
  mechanism based on a new exception vector called VMM Communication
  Exception (#VC), a new instruction called VMGEXIT and a shared
  Guest-Host Communication Block which is a decrypted page shared
  between the guest and the hypervisor.

   Intercepts to the hypervisor become #VC exceptions in an SEV-ES
   guest, so for that exception mechanism to work, the early x86 init
   code had to be made capable of handling exceptions. That, in itself,
   brings a bunch of very nice cleanups and improvements to the early
   boot code, like an early page fault handler allowing for on-demand
   building of the identity mapping. With that, !KASLR configurations do
   not use the EFI page table anymore but switch to a kernel-controlled
   one.

  The main part of this series adds the support for that new exchange
   mechanism. The goal has been to keep this as separate as possible
  from the core x86 code by concentrating the machinery in two
  SEV-ES-specific files:

    arch/x86/kernel/sev-es-shared.c
    arch/x86/kernel/sev-es.c

   Other interaction with core x86 code has been kept to a minimum and
  behind static keys to minimize the performance impact on !SEV-ES
  setups.

  Work by Joerg Roedel and Thomas Lendacky and others"

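The series keeps the !SEV-ES fast path cheap with static keys, as noted above. Below is a minimal sketch of that pattern; the key name matches the one the series uses, but the surrounding function and the sev_es_specific_work() helper are illustrative assumptions, not the kernel's actual code:

  /* Sketch of the static-key pattern described above; illustrative only. */
  #include <linux/jump_label.h>

  DEFINE_STATIC_KEY_FALSE(sev_es_enable_key); /* flipped at boot on SEV-ES */

  static void maybe_do_sev_es_work(void)
  {
          /* Compiles to a NOP on !SEV-ES kernels until the key is enabled */
          if (static_branch_unlikely(&sev_es_enable_key))
                  sev_es_specific_work(); /* hypothetical helper */
  }
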
* tag 'x86_seves_for_v5.10' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (73 commits)
  x86/sev-es: Use GHCB accessor for setting the MMIO scratch buffer
  x86/sev-es: Check required CPU features for SEV-ES
  x86/efi: Add GHCB mappings when SEV-ES is active
  x86/sev-es: Handle NMI State
  x86/sev-es: Support CPU offline/online
  x86/head/64: Don't call verify_cpu() on starting APs
  x86/smpboot: Load TSS and getcpu GDT entry before loading IDT
  x86/realmode: Setup AP jump table
  x86/realmode: Add SEV-ES specific trampoline entry point
  x86/vmware: Add VMware-specific handling for VMMCALL under SEV-ES
  x86/kvm: Add KVM-specific VMMCALL handling under SEV-ES
  x86/paravirt: Allow hypervisor-specific VMMCALL handling under SEV-ES
  x86/sev-es: Handle #DB Events
  x86/sev-es: Handle #AC Events
  x86/sev-es: Handle VMMCALL Events
  x86/sev-es: Handle MWAIT/MWAITX Events
  x86/sev-es: Handle MONITOR/MONITORX Events
  x86/sev-es: Handle INVD Events
  x86/sev-es: Handle RDPMC Events
  x86/sev-es: Handle RDTSC(P) Events
  ...
2020-10-14 10:21:34 -07:00
6873139ed0 objtool changes for v5.10:
- Most of the changes are cleanups and reorganization to make the objtool code
    more arch-agnostic. This is in preparation for non-x86 support.
 
 Fixes:
 
  - KASAN fixes.
  - Handle unreachable trap after call to noreturn functions better.
  - Ignore unreachable fake jumps.
  - Misc smaller fixes & cleanups.
 
 Signed-off-by: Ingo Molnar <mingo@kernel.org>

Merge tag 'objtool-core-2020-10-13' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull objtool updates from Ingo Molnar:
 "Most of the changes are cleanups and reorganization to make the
  objtool code more arch-agnostic. This is in preparation for non-x86
  support.

  Other changes:

   - KASAN fixes

   - Handle unreachable trap after call to noreturn functions better

   - Ignore unreachable fake jumps

   - Misc smaller fixes & cleanups"

* tag 'objtool-core-2020-10-13' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (21 commits)
  perf build: Allow nested externs to enable BUILD_BUG() usage
  objtool: Allow nested externs to enable BUILD_BUG()
  objtool: Permit __kasan_check_{read,write} under UACCESS
  objtool: Ignore unreachable trap after call to noreturn functions
  objtool: Handle calling non-function symbols in other sections
  objtool: Ignore unreachable fake jumps
  objtool: Remove useless tests before save_reg()
  objtool: Decode unwind hint register depending on architecture
  objtool: Make unwind hint definitions available to other architectures
  objtool: Only include valid definitions depending on source file type
  objtool: Rename frame.h -> objtool.h
  objtool: Refactor jump table code to support other architectures
  objtool: Make relocation in alternative handling arch dependent
  objtool: Abstract alternative special case handling
  objtool: Move macros describing structures to arch-dependent code
  objtool: Make sync-check consider the target architecture
  objtool: Group headers to check in a single list
  objtool: Define 'struct orc_entry' only when needed
  objtool: Skip ORC entry creation for non-text sections
  objtool: Move ORC logic out of check()
  ...
2020-10-14 10:13:37 -07:00
d5660df4a5 Merge branch 'akpm' (patches from Andrew)
Merge misc updates from Andrew Morton:
 "181 patches.

  Subsystems affected by this patch series: kbuild, scripts, ntfs,
  ocfs2, vfs, mm (slab, slub, kmemleak, dax, debug, pagecache, fadvise,
  gup, swap, memremap, memcg, selftests, pagemap, mincore, hmm, dma,
   memory-failure, vmalloc and migration)"

* emailed patches from Andrew Morton <akpm@linux-foundation.org>: (181 commits)
  mm/migrate: remove obsolete comment about device public
  mm/migrate: remove cpages-- in migrate_vma_finalize()
  mm, oom_adj: don't loop through tasks in __set_oom_adj when not necessary
  memblock: use separate iterators for memory and reserved regions
  memblock: implement for_each_reserved_mem_region() using __next_mem_region()
  memblock: remove unused memblock_mem_size()
  x86/setup: simplify reserve_crashkernel()
  x86/setup: simplify initrd relocation and reservation
  arch, drivers: replace for_each_membock() with for_each_mem_range()
  arch, mm: replace for_each_memblock() with for_each_mem_pfn_range()
  memblock: reduce number of parameters in for_each_mem_range()
  memblock: make memblock_debug and related functionality private
  memblock: make for_each_memblock_type() iterator private
  microblaze: drop unneeded NUMA and sparsemem initializations
  riscv: drop unneeded node initialization
  h8300, nds32, openrisc: simplify detection of memory extents
  arm64: numa: simplify dummy_numa_init()
  arm, xtensa: simplify initialization of high memory pages
  dma-contiguous: simplify cma_early_percent_memory()
  KVM: PPC: Book3S HV: simplify kvm_cma_reserve()
  ...
2020-10-14 09:57:24 -07:00
caf7f9685d perf jevents: Fix event code for events referencing std arch events
The event code for events referencing std arch events is incorrectly
evaluated in json_events().

The issue is that je.event is evaluated properly in try_fixup(), but
later NULLified by the real_event() call, as "event" may be NULL.

Fix by setting "event" to je.event in try_fixup().

Also remove support for overwriting event code for events using std arch
events, as it is not used.

Signed-off-by: John Garry <john.garry@huawei.com>
Reviewed-by: Kajol Jain <kjain@linux.ibm.com>
Acked-by: Jiri Olsa <jolsa@redhat.com>
Link: https://lore.kernel.org/r/1602170368-11892-1-git-send-email-john.garry@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-10-14 13:43:31 -03:00
2a09a84c72 perf diff: Support hot streams comparison
This patch enables perf-diff with the "--stream" option.

"--stream": Enable hot streams comparison

Now let's see an example.

perf record -b ...      Generate perf.data.old with branch data
perf record -b ...      Generate perf.data with branch data
perf diff --stream

[ Matched hot streams ]

hot chain pair 1:
            cycles: 1, hits: 27.77%                  cycles: 1, hits: 9.24%
        ---------------------------              --------------------------
                      main div.c:39                           main div.c:39
                      main div.c:44                           main div.c:44

hot chain pair 2:
           cycles: 34, hits: 20.06%                cycles: 27, hits: 16.98%
        ---------------------------              --------------------------
          __random_r random_r.c:360               __random_r random_r.c:360
          __random_r random_r.c:388               __random_r random_r.c:388
          __random_r random_r.c:388               __random_r random_r.c:388
          __random_r random_r.c:380               __random_r random_r.c:380
          __random_r random_r.c:357               __random_r random_r.c:357
              __random random.c:293                   __random random.c:293
              __random random.c:293                   __random random.c:293
              __random random.c:291                   __random random.c:291
              __random random.c:291                   __random random.c:291
              __random random.c:291                   __random random.c:291
              __random random.c:288                   __random random.c:288
                     rand rand.c:27                          rand rand.c:27
                     rand rand.c:26                          rand rand.c:26
                           rand@plt                                rand@plt
                           rand@plt                                rand@plt
              compute_flag div.c:25                   compute_flag div.c:25
              compute_flag div.c:22                   compute_flag div.c:22
                      main div.c:40                           main div.c:40
                      main div.c:40                           main div.c:40
                      main div.c:39                           main div.c:39

hot chain pair 3:
             cycles: 9, hits: 4.48%                  cycles: 6, hits: 4.51%
        ---------------------------              --------------------------
          __random_r random_r.c:360               __random_r random_r.c:360
          __random_r random_r.c:388               __random_r random_r.c:388
          __random_r random_r.c:388               __random_r random_r.c:388
          __random_r random_r.c:380               __random_r random_r.c:380

[ Hot streams in old perf data only ]

hot chain 1:
            cycles: 18, hits: 6.75%
         --------------------------
          __random_r random_r.c:360
          __random_r random_r.c:388
          __random_r random_r.c:388
          __random_r random_r.c:380
          __random_r random_r.c:357
              __random random.c:293
              __random random.c:293
              __random random.c:291
              __random random.c:291
              __random random.c:291
              __random random.c:288
                     rand rand.c:27
                     rand rand.c:26
                           rand@plt
                           rand@plt
              compute_flag div.c:25
              compute_flag div.c:22
                      main div.c:40

hot chain 2:
            cycles: 29, hits: 2.78%
         --------------------------
              compute_flag div.c:22
                      main div.c:40
                      main div.c:40
                      main div.c:39

[ Hot streams in new perf data only ]

hot chain 1:
                                                     cycles: 4, hits: 4.54%
                                                 --------------------------
                                                              main div.c:42
                                                      compute_flag div.c:28

hot chain 2:
                                                     cycles: 5, hits: 3.51%
                                                 --------------------------
                                                              main div.c:39
                                                              main div.c:44
                                                              main div.c:42
                                                      compute_flag div.c:28

Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Link: https://lore.kernel.org/r/20201009022845.13141-8-yao.jin@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-10-14 13:34:48 -03:00
5bbd6bad3b perf streams: Report hot streams
We show the streams separately, divided into different sections:

1. "Matched hot streams"

2. "Hot streams in old perf data only"

3. "Hot streams in new perf data only".

For each stream, we report the cycles and hot percent (hits%).

For example,

     cycles: 2, hits: 4.08%
 --------------------------
              main div.c:42
      compute_flag div.c:28

Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Link: https://lore.kernel.org/r/20201009022845.13141-7-yao.jin@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-10-14 13:34:26 -03:00
28904f4dce perf streams: Calculate the sum of total streams hits
We have used callchain_node->hit to measure the hot level of one stream.
This patch calculates the sum of hits over all streams.

Thus, in the next patch, we can use the following formula to report the
hot percent of one stream.

hot percent = callchain_node->hit / sum of total hits

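As a minimal illustration of that formula (names here are ours, not perf's internals):

  /* Illustrative sketch only; not the actual perf implementation. */
  #include <stdio.h>

  static double hot_percent(unsigned long long node_hit,
                            unsigned long long total_streams_hit)
  {
          if (total_streams_hit == 0)
                  return 0.0;
          return 100.0 * (double)node_hit / (double)total_streams_hit;
  }

  int main(void)
  {
          /* e.g. a stream with 27 hits out of 100 total hits -> 27.00% */
          printf("hits: %.2f%%\n", hot_percent(27, 100));
          return 0;
  }
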
Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Link: https://lore.kernel.org/r/20201009022845.13141-6-yao.jin@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-10-14 13:34:06 -03:00
fa79aa6485 perf streams: Link stream pair
In the previous patch, we created an evsel_streams for each event, and
the top N hottest streams are saved in a stream array in evsel_streams.

This patch compares all streams between two evsel_streams.

Once two streams are fully matched, they will be linked as a pair. From
the pair, we can know which streams are matched.

Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Link: https://lore.kernel.org/r/20201009022845.13141-5-yao.jin@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-10-14 13:32:36 -03:00
47ef8398c3 perf streams: Compare two streams
A stream is the branch history aggregated from the branch records in
perf samples. For now, we support the callchain as a stream.

If the callchain entries of one stream are fully matched with the
callchain entries of another stream, we consider the two streams matched.

For example,

   cycles: 1, hits: 26.80%                 cycles: 1, hits: 27.30%
   -----------------------                 -----------------------
             main div.c:39                           main div.c:39
             main div.c:44                           main div.c:44

The two streams above are matched (we don't consider the case where the
source code has changed).

The matching logic compares the chain strings first; if they don't
match, it falls back to dso address comparison.

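A rough sketch of that two-step match (the types and names are ours, not perf's actual code):

  /* Rough sketch of the two-step match; types/names are illustrative. */
  #include <stdbool.h>
  #include <string.h>

  struct chain_entry {
          const char *srcline;    /* e.g. "main div.c:39", may be NULL */
          unsigned long long ip;  /* fallback: dso-relative address */
  };

  static bool entry_match(const struct chain_entry *a,
                          const struct chain_entry *b)
  {
          if (a->srcline && b->srcline)
                  return strcmp(a->srcline, b->srcline) == 0;
          return a->ip == b->ip;  /* fall back to address comparison */
  }

  static bool streams_match(const struct chain_entry *s1,
                            const struct chain_entry *s2, int nr)
  {
          for (int i = 0; i < nr; i++)
                  if (!entry_match(&s1[i], &s2[i]))
                          return false;
          return true;            /* fully matched: link as a pair */
  }
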
Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Link: https://lore.kernel.org/r/20201009022845.13141-4-yao.jin@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-10-14 13:31:56 -03:00
dd1d841810 perf streams: Get the evsel_streams by evsel_idx
In the previous patch, we created the evsel_streams array.

This patch returns the evsel_streams entry specified by evsel_idx.

Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Link: https://lore.kernel.org/r/20201009022845.13141-3-yao.jin@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-10-14 13:30:13 -03:00
480accbb17 perf streams: Introduce branch history "streams"
We define a stream as the branch history aggregated from the branch
records in perf samples. For example, the callchains aggregated from the
branch records are considered streams. By browsing the hot streams, we
can understand the hot code paths.

For now, only the callchain is supported as a stream. To measure how hot
a stream is, we use callchain_node->hit; higher is hotter.

There may be many sampled callchains, so we only focus on the top N
hottest ones. N is a user-defined parameter or a predefined default
value (nr_streams_max).

This patch creates an evsel_streams array per event, and saves the top N
hottest streams in a stream array.

So now we can get the per-event top N hottest streams.

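As a rough illustration of that per-event selection (not perf's actual data structures), keeping the nr_streams_max hottest entries can be as simple as a sort by hits:

  /* Illustrative top-N selection by callchain_node->hit; not perf's code. */
  #include <stdlib.h>

  struct stream {
          unsigned long long hit; /* hotness: higher is hotter */
          /* ... callchain data elided ... */
  };

  static int cmp_hit_desc(const void *a, const void *b)
  {
          const struct stream *sa = a, *sb = b;

          return (sa->hit < sb->hit) - (sa->hit > sb->hit);
  }

  /* Keep the nr_streams_max hottest of streams[0..n-1]. */
  static int top_n_streams(struct stream *streams, int n, int nr_streams_max)
  {
          qsort(streams, n, sizeof(*streams), cmp_hit_desc);
          return n < nr_streams_max ? n : nr_streams_max;
  }
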
Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Link: https://lore.kernel.org/r/20201009022845.13141-2-yao.jin@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-10-14 13:27:28 -03:00
6556a75bec perf intel-pt: Improve PT documentation slightly
Document the higher-level --insn-trace etc. perf script options.

Include the howto on how to build xed in the manpage.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Link: http://lore.kernel.org/lkml/20201014035346.4772-1-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-10-14 13:14:40 -03:00
0997a2662f perf tools: Add support for exclusive groups/events
Peter suggested that using the exclusive mode in perf could avoid some
problems with bad scheduling of groups. Exclusive is implemented in the
kernel, but wasn't exposed by the perf tool, so it was hard to use
without custom low-level API code.

Add support for marking groups or events with :e for exclusive in the
perf tool.  The implementation is basically the same as the existing
pinned attribute.

Committer testing:

  # perf test "parse event"
   6: Parse event definition strings                                  : Ok
  # perf test -v "parse event" |& grep :u*e
  running test 56 'instructions:uep'
  running test 57 '{cycles,cache-misses,branch-misses}:e'
  #
  #
  # grep "model name" -m1 /proc/cpuinfo
  model name	: AMD Ryzen 9 3900X 12-Core Processor
  #
  # perf stat -a -e '{cycles,cache-misses,branch-misses}:e' sleep 1

   Performance counter stats for 'system wide':

       <not counted>      cycles                                                        (0.00%)
       <not counted>      cache-misses                                                  (0.00%)
       <not counted>      branch-misses                                                 (0.00%)

         1.001269893 seconds time elapsed

  Some events weren't counted. Try disabling the NMI watchdog:
  	echo 0 > /proc/sys/kernel/nmi_watchdog
  	perf stat ...
  	echo 1 > /proc/sys/kernel/nmi_watchdog
  # echo 0 > /proc/sys/kernel/nmi_watchdog
  # perf stat -a -e '{cycles,cache-misses,branch-misses}:e' sleep 1

   Performance counter stats for 'system wide':

       1,298,663,141      cycles
          30,962,215      cache-misses
           5,325,150      branch-misses

         1.001474934 seconds time elapsed

  #
  # The output for asking for precise events on AMD needs to improve, it
  # supposedly works only for system wide or per CPU
  #
  # perf stat -a -e '{cycles,cache-misses,branch-misses}:uep' sleep 1
  Error:
  The sys_perf_event_open() syscall returned with 22 (Invalid argument) for event (cycles).
  /bin/dmesg | grep -i perf may provide additional information.

  # perf stat -a -e '{cycles,cache-misses,branch-misses}:ue' sleep 1

   Performance counter stats for 'system wide':

         746,363,126      cycles
          16,881,611      cache-misses
           2,871,259      branch-misses

         1.001636066 seconds time elapsed

  #

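For reference, the ':e' modifier maps to the exclusive bit in the event attribute. A minimal userspace sketch of setting it directly via the raw syscall (error handling elided; this is not the perf tool's own code):

  /* Minimal sketch: what ':e' sets, via the raw perf_event_open API. */
  #include <linux/perf_event.h>
  #include <string.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  static int open_exclusive_cycles(void)
  {
          struct perf_event_attr attr;

          memset(&attr, 0, sizeof(attr));
          attr.size      = sizeof(attr);
          attr.type      = PERF_TYPE_HARDWARE;
          attr.config    = PERF_COUNT_HW_CPU_CYCLES;
          attr.exclusive = 1;   /* group gets the PMU to itself when on */

          /* measure self, any CPU, no group leader, no flags */
          return syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
  }
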
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lore.kernel.org/lkml/20201014144255.22699-1-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-10-14 12:24:28 -03:00
78b2c50c5d perf test: Add build id shell test
Add a test for the build id cache that adds a binary with sha1 and md5
build ids and verifies they are added properly.

The test updates build id cache with 'perf record' and 'perf buildid-cache -a'.

Committer testing:

  # perf test "build id"
  82: build id cache operations                                       : Ok
  #
  # perf test -v "build id"
  82: build id cache operations                                       :
  --- start ---
  test child forked, pid 447218
  test binaries: /tmp/perf.ex.SHA1.B8I /tmp/perf.ex.MD5.7Nv
  Adding d1abc1eb7568358cf23c959566f23462461834d1 /tmp/perf.ex.SHA1.B8I: Ok
  build id: d1abc1eb7568358cf23c959566f23462461834d1
  link: /tmp/perf.debug.sS2/.build-id/d1/abc1eb7568358cf23c959566f23462461834d1
  file: /tmp/perf.debug.sS2/.build-id/d1/../../tmp/perf.ex.SHA1.B8I/d1abc1eb7568358cf23c959566f23462461834d1/elf
  OK for /tmp/perf.ex.SHA1.B8I
  Adding a50e350e97c43b4708d09bcd85ebfff7 /tmp/perf.ex.MD5.7Nv: Ok
  build id: a50e350e97c43b4708d09bcd85ebfff7
  link: /tmp/perf.debug.IuW/.build-id/a5/0e350e97c43b4708d09bcd85ebfff7
  file: /tmp/perf.debug.IuW/.build-id/a5/../../tmp/perf.ex.MD5.7Nv/a50e350e97c43b4708d09bcd85ebfff7/elf
  OK for /tmp/perf.ex.MD5.7Nv
  [ perf record: Woken up 1 times to write data ]
  [ perf record: Captured and wrote 0.034 MB /tmp/perf.data.xrH ]
  build id: d1abc1eb7568358cf23c959566f23462461834d1
  link: /tmp/perf.debug.eGR/.build-id/d1/abc1eb7568358cf23c959566f23462461834d1
  file: /tmp/perf.debug.eGR/.build-id/d1/../../tmp/perf.ex.SHA1.B8I/d1abc1eb7568358cf23c959566f23462461834d1/elf
  OK for /tmp/perf.ex.SHA1.B8I
  [ perf record: Woken up 2 times to write data ]
  [ perf record: Captured and wrote 0.034 MB /tmp/perf.data.cbE ]
  build id: a50e350e97c43b4708d09bcd85ebfff7
  link: /tmp/perf.debug.82t/.build-id/a5/0e350e97c43b4708d09bcd85ebfff7
  file: /tmp/perf.debug.82t/.build-id/a5/../../tmp/perf.ex.MD5.7Nv/a50e350e97c43b4708d09bcd85ebfff7/elf
  OK for /tmp/perf.ex.MD5.7Nv
  test child finished with 0
  ---- end ----
  build id cache operations: Ok
  #

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Acked-by: Ian Rogers <irogers@google.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Link: https://lore.kernel.org/r/20201013192441.1299447-10-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-10-14 11:28:52 -03:00
e9ad94381c perf tools: Align buildid list output for short build ids
With shorter md5 build ids we need to align their paths properly with
other build ids:

  $ perf buildid-list
  17f4e448cc746582ea1881528deb549f7fdb3fd5 [kernel.kallsyms]
  a50e350e97c43b4708d09bcd85ebfff7         .../tools/perf/buildid-ex-md5
  1805c738c8f3ec0f47b7ea09080c28f34d18a82b /usr/lib64/ld-2.31.so
  $

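A minimal sketch of the padding idea (the constant and names are illustrative, not perf's actual code):

  /* Illustrative sketch of the alignment fix; not perf's actual code. */
  #include <stdio.h>

  #define SHA1_ID_WIDTH 40   /* sha1 ids are 40 hex chars; md5 only 32 */

  static void print_buildid(const char *sbuild_id, const char *path)
  {
          /* left-justify so shorter md5 ids line up with sha1 ids */
          printf("%-*s %s\n", SHA1_ID_WIDTH, sbuild_id, path);
  }
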
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Acked-by: Ian Rogers <irogers@google.com>
Link: https://lore.kernel.org/r/20201013192441.1299447-9-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-10-14 11:28:52 -03:00
b0a323c7f0 perf tools: Add size to 'struct perf_record_header_build_id'
We do not store the size with build ids in perf data, but there's enough
space to do it. Add the misc bit PERF_RECORD_MISC_BUILD_ID_SIZE to mark
a build id event that carries the size.

With this fix, a dso with an md5 build id will have correct build id
data and will be usable for debuginfod processing if needed (coming in
following patches).

Committer notes:

Use %zu with size_t to fix this error on 32-bit arches:

  util/header.c: In function '__event_process_build_id':
  util/header.c:2105:3: error: format '%lu' expects argument of type 'long unsigned int', but argument 6 has type 'size_t' [-Werror=format=]
     pr_debug("build id event received for %s: %s [%lu]\n",
     ^

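The corrected call uses the %zu length modifier, which matches size_t on both 32-bit and 64-bit builds; the argument names below are reconstructed from the error above, not copied from util/header.c:

  /* %zu is the portable printf conversion for size_t */
  pr_debug("build id event received for %s: %s [%zu]\n",
           filename, sbuild_id, size);
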
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Acked-by: Ian Rogers <irogers@google.com>
Link: https://lore.kernel.org/r/20201013192441.1299447-8-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-10-14 11:28:12 -03:00
39be8d0115 perf tools: Pass build_id object to dso__build_id_equal()
Pass a build_id object to dso__build_id_equal(), so we can properly
check build ids with a size different from sha1's.

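A sketch of what a size-aware comparison looks like (the real perf code may differ in detail):

  /* Sketch of a size-aware comparison; the real perf code may differ. */
  #include <stdbool.h>
  #include <string.h>

  struct build_id {
          unsigned char data[20]; /* sha1 needs 20 bytes; md5 uses 16 */
          size_t        size;
  };

  static bool build_ids_equal(const struct build_id *a,
                              const struct build_id *b)
  {
          /* ids of different sizes can never be equal */
          return a->size == b->size &&
                 memcmp(a->data, b->data, a->size) == 0;
  }
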
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Acked-by: Ian Rogers <irogers@google.com>
Link: https://lore.kernel.org/r/20201013192441.1299447-7-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-10-14 09:25:36 -03:00
8dfdf440d3 perf tools: Pass build_id object to dso__set_build_id()
Pass a build_id object to dso__set_build_id(), so it's easier to
initialize the dso's build id object.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Acked-by: Ian Rogers <irogers@google.com>
Link: https://lore.kernel.org/r/20201013192441.1299447-6-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-10-14 08:46:42 -03:00
bf5411695a perf tools: Pass build_id object to build_id__sprintf()
Pass a build_id object to the build_id__sprintf() function, so it can
operate with the proper build id size.

This will create proper, readable md5 build id names, like the
following:

  a50e350e97c43b4708d09bcd85ebfff7

instead of:

  a50e350e97c43b4708d09bcd85ebfff700000000

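A sketch of the size-aware formatting (illustrative, not perf's actual implementation): only 'size' bytes are emitted, so an md5 id comes out as 32 hex chars instead of being padded to 40.

  /* Sketch: emit exactly 'size' bytes as hex; not perf's actual code. */
  #include <stdio.h>

  static int sprintf_build_id(char *bf, const unsigned char *data,
                              size_t size)
  {
          char *p = bf;

          for (size_t i = 0; i < size; i++)
                  p += sprintf(p, "%02x", data[i]);
          return p - bf;   /* 32 for md5, 40 for sha1 */
  }
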
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Acked-by: Ian Rogers <irogers@google.com>
Link: https://lore.kernel.org/r/20201013192441.1299447-5-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-10-14 08:46:22 -03:00
3ff1b8c8cc perf tools: Pass build id object to sysfs__read_build_id()
Pass a build_id object to the sysfs__read_build_id() function, so it can
populate the size of the build_id object.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Acked-by: Ian Rogers <irogers@google.com>
Link: https://lore.kernel.org/r/20201013192441.1299447-4-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-10-14 08:46:02 -03:00
f766819cd5 perf tools: Pass build_id object to filename__read_build_id()
Pass a build_id object to the filename__read_build_id() function, so it
can populate the size of the build_id object.

This changes filename__read_build_id() for both the ELF and non-ELF
paths.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Acked-by: Ian Rogers <irogers@google.com>
Link: https://lore.kernel.org/r/20201013192441.1299447-3-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-10-14 08:45:16 -03:00
0aba7f036a perf tools: Use build_id object in dso
Replace the build_id byte array with a struct build_id object and update
all the code that references it.

The objective is to carry the size together with the build id array, so
it's better to keep both together.

This is preparatory change for following patches, and there's no
functional change.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Acked-by: Ian Rogers <irogers@google.com>
Link: https://lore.kernel.org/r/20201013192441.1299447-2-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-10-14 08:44:47 -03:00
996f9e0f93 selftests/powerpc: Fix eeh-basic.sh exit codes
The kselftests test running infrastructure expects tests to finish with an
exit code of 4 if the test decided it should be skipped. Currently
eeh-basic.sh exits with the number of devices that failed to recover, so if
four devices didn't recover we'll report a skip instead of a fail.

Fix this by checking if the return code is non-zero and reporting
success or failure by returning 0 or 1 respectively. For the cases that
should actually skip, return 4.

Fixes: 85d86c8aa52e ("selftests/powerpc: Add basic EEH selftest")
Signed-off-by: Oliver O'Halloran <oohall@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201014024711.1138386-1-oohall@gmail.com
2020-10-14 22:03:39 +11:00
1100262037 selftests/vm: 8x compaction_test speedup
This patch reduces the running time for compaction_test from about 27
sec to 3.3 sec, which is about an 8x speedup.

These numbers are for an Intel x86_64 system with 32 GB of DRAM.

The compaction_test.c program was spending most of its time doing mmap(),
1 MB at a time, on about 25 GB of memory.

Instead, do the mmaps 100 MB at a time.  (Going past 100 MB doesn't make
things go much faster, because other parts of the program are using the
remaining time.)

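A sketch of the chunking idea only; unlike the real test, which keeps its mappings around to fill memory, this unmaps each chunk after touching it:

  /* Sketch of the chunking idea only; not the test itself. */
  #include <stdio.h>
  #include <sys/mman.h>

  #define CHUNK_BYTES (100UL << 20)   /* 100 MB per mmap() call */

  static int fault_in(unsigned long total_bytes)
  {
          for (unsigned long done = 0; done < total_bytes;
               done += CHUNK_BYTES) {
                  char *map = mmap(NULL, CHUNK_BYTES,
                                   PROT_READ | PROT_WRITE,
                                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
                  if (map == MAP_FAILED) {
                          perror("mmap");
                          return -1;
                  }
                  for (unsigned long i = 0; i < CHUNK_BYTES; i += 4096)
                          map[i] = 1;   /* fault every page in */
                  munmap(map, CHUNK_BYTES);
          }
          return 0;
  }
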
Signed-off-by: John Hubbard <jhubbard@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: Sri Jayaramappa <sjayaram@akamai.com>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Mel Gorman <mgorman@techsingularity.net>
Link: https://lkml.kernel.org/r/20201002080621.551044-2-jhubbard@nvidia.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-10-13 18:38:34 -07:00
bfe18a0900 tools/testing/selftests/vm/hmm-tests.c: use the new SKIP() macro
Some tests might not be able to be run if resources like huge pages are
not available.  Mark these tests as skipped instead of simply passing.

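A hedged example of the SKIP() usage pattern from kselftest_harness.h; the resource probe below is a stand-in, and the include path assumes the usual selftests layout:

  /* Hedged sketch of the SKIP() pattern; probe elided for brevity. */
  #include <stdbool.h>
  #include "../kselftest_harness.h"

  static bool huge_pages_available(void)
  {
          return false;   /* real probe elided for brevity */
  }

  TEST(needs_huge_pages)
  {
          if (!huge_pages_available())
                  SKIP(return, "huge pages not available");

          /* real assertions would follow here */
  }

  TEST_HARNESS_MAIN
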
Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Cc: Jerome Glisse <jglisse@redhat.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Shuah Khan <shuah@kernel.org>
Link: http://lkml.kernel.org/r/20200827190400.12608-1-rcampbell@nvidia.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-10-13 18:38:32 -07:00
34d109131f selftests/vm: fix incorrect gcc invocation in some cases
Avoid accidental wrong builds, due to built-in rules working just a
little bit too well, though not quite as well as required for our
situation here.

In other words, "make userfaultfd" (for example) is supposed to fail to
build at all, because this Makefile only supports either "make" (all), or
"make /full/path".  However, the built-in rules, if not suppressed, will
pick up CFLAGS and the initial LDLIBS (but not the target-specific LDLIBS,
because those are only set for the full path target!).  This causes it to
get pretty far into building things despite using incorrect values such as
an *occasionally* incomplete LDLIBS value.

Signed-off-by: John Hubbard <jhubbard@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Jason Gunthorpe <jgg@ziepe.ca>
Link: https://lkml.kernel.org/r/20200915012901.1655280-3-jhubbard@nvidia.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-10-13 18:38:31 -07:00
efc9511cec selftests/vm: fix false build success on the second and later attempts
Patch series "selftests/vm: fix some minor aggravating factors in the Makefile".

This fixes a couple of minor aggravating factors that I ran across while
trying to do some changes in selftests/vm.  These are simple things, but
like most things with GNU Make, it's rarely obvious what's wrong until you
understand *the entire Makefile and all of its includes*.

So while there is, of course, joy in learning those details, I thought I'd
fix these little things, so as to allow others to skip out on the Joy if
they so choose.  :)

First of all, if you have an item (let's choose userfaultfd for an
example) that fails to build, you might do this:

$ make -j32

    # ...you observe a failed item in the threaded output

# OK, let's get a closer look

$ make
    # ...but now the build quietly "succeeds".

That's what Patch 0001 fixes.

Second, if you instead attempt this approach for your closer look (a casual
mistake, as it's not supported):

$ make userfaultfd

    # ...userfaultfd fails to link, due to incomplete LDLIBS

That's what Patch 0002 fixes.

This patch (of 2):

If one or more of these selftest fail to build, then after the first
failure, subsequent invocations of "make" will make it appear that there
are no build failures, after all.

That's because the failed build products remain, with up-to-date
timestamps, thus tricking Make (and you!) into believing that there's
nothing else to build.

Fix this by telling Make to delete targets that didn't completely
succeed.

Signed-off-by: John Hubbard <jhubbard@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Jason Gunthorpe <jgg@ziepe.ca>
Link: https://lkml.kernel.org/r/20200915012901.1655280-1-jhubbard@nvidia.com
Link: https://lkml.kernel.org/r/20200915012901.1655280-2-jhubbard@nvidia.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-10-13 18:38:31 -07:00
657d4f7996 mm/gup_benchmark: use pin_user_pages for FOLL_LONGTERM flag
According to Documentation/core-api/pin_user_pages.rst, FOLL_PIN is a
prerequisite to FOLL_LONGTERM.  Another way of saying that is,
FOLL_LONGTERM is a specific case, more restrictive case of FOLL_PIN.

Almost all kernel modules use pin_user_pages() with FOLL_LONGTERM;
mm/gup_benchmark.c seems to be the only exception in which FOLL_PIN is
not a prerequisite to FOLL_LONGTERM.

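A kernel-side sketch of the pairing described above (illustrative, not the benchmark's code; the pin_user_pages*() entry points imply FOLL_PIN, and unpin_user_pages() is the paired release):

  /* Kernel-side sketch of the FOLL_PIN/FOLL_LONGTERM pairing. */
  #include <linux/mm.h>

  static int pin_longterm(unsigned long uaddr, int nr_pages,
                          struct page **pages)
  {
          /* pin_user_pages*() imply FOLL_PIN, the prerequisite above */
          int pinned = pin_user_pages_fast(uaddr, nr_pages,
                                           FOLL_WRITE | FOLL_LONGTERM,
                                           pages);

          if (pinned > 0)
                  unpin_user_pages(pages, pinned); /* paired release */
          return pinned;
  }
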
Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
Cc: Jan Kara <jack@suse.cz>
Cc: Jérôme Glisse <jglisse@redhat.com>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Dave Chinner <david@fromorbit.com>
Cc: Jason Gunthorpe <jgg@ziepe.ca>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Link: http://lkml.kernel.org/r/20200815122056.29508-1-song.bao.hua@hisilicon.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-10-13 18:38:29 -07:00
60e93dc097 device-dax: add dis-contiguous resource support
Break the requirement that device-dax instances are physically
contiguous. With this constraint removed, fragmented available capacity
can be fully allocated.

This capability is useful to mitigate the "noisy neighbor" problem with
memory-side-cache management for virtual machines, or any other scenario
where a platform address boundary also designates a performance boundary.
For example a direct mapped memory side cache might rotate cache colors at
1GB boundaries.  With dis-contiguous allocations a device-dax instance
could be configured to contain only 1 cache color.

It also satisfies Joao's use case (see link) for partitioning memory for
exclusive guest access.  It allows for a future potential mode where the
host kernel need not allocate 'struct page' capacity up-front.

Reported-by: Joao Martins <joao.m.martins@oracle.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Ben Skeggs <bskeggs@redhat.com>
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Brice Goglin <Brice.Goglin@inria.fr>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Dave Jiang <dave.jiang@intel.com>
Cc: David Airlie <airlied@linux.ie>
Cc: David Hildenbrand <david@redhat.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Hulk Robot <hulkci@huawei.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Ira Weiny <ira.weiny@intel.com>
Cc: Jason Gunthorpe <jgg@mellanox.com>
Cc: Jason Yan <yanaijie@huawei.com>
Cc: Jeff Moyer <jmoyer@redhat.com>
Cc: "Jérôme Glisse" <jglisse@redhat.com>
Cc: Jia He <justin.he@arm.com>
Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: kernel test robot <lkp@intel.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Mike Rapoport <rppt@linux.ibm.com>
Cc: Paul Mackerras <paulus@ozlabs.org>
Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Vishal Verma <vishal.l.verma@intel.com>
Cc: Vivek Goyal <vgoyal@redhat.com>
Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/lkml/20200110190313.17144-1-joao.m.martins@oracle.com/
Link: https://lkml.kernel.org/r/159643104304.4062302.16561669534797528660.stgit@dwillia2-desk3.amr.corp.intel.com
Link: https://lkml.kernel.org/r/160106116875.30709.11456649969327399771.stgit@dwillia2-desk3.amr.corp.intel.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-10-13 18:38:28 -07:00
a4574f63ed mm/memremap_pages: convert to 'struct range'
The 'struct resource' in 'struct dev_pagemap' is only used for holding
resource span information.  The other fields, 'name', 'flags', 'desc',
'parent', 'sibling', and 'child' are all unused wasted space.

This is in preparation for introducing a multi-range extension of
devm_memremap_pages().

The bulk of this change is unwinding all the places internal to libnvdimm
that used 'struct resource' unnecessarily, and replacing instances of
'struct dev_pagemap'.res with 'struct dev_pagemap'.range.

P2PDMA had a minor usage of the resource flags field, but only to report
failures with "%pR".  That is replaced with an open coded print of the
range.

[dan.carpenter@oracle.com: mm/hmm/test: use after free in dmirror_allocate_chunk()]
  Link: https://lkml.kernel.org/r/20200926121402.GA7467@kadam

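For reference, 'struct range' carries just a start and an end, so the old resource_size() arithmetic maps onto it directly; the helper name below is illustrative:

  /* 'struct range' keeps only the span dev_pagemap actually used. */
  #include <linux/range.h>

  static u64 pagemap_span(const struct range *range)
  {
          /* same value the old resource_size(&pgmap->res) produced */
          return range->end - range->start + 1;
  }
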
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>	[xen]
Cc: Paul Mackerras <paulus@ozlabs.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Vishal Verma <vishal.l.verma@intel.com>
Cc: Vivek Goyal <vgoyal@redhat.com>
Cc: Dave Jiang <dave.jiang@intel.com>
Cc: Ben Skeggs <bskeggs@redhat.com>
Cc: David Airlie <airlied@linux.ie>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Ira Weiny <ira.weiny@intel.com>
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: "Jérôme Glisse" <jglisse@redhat.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brice Goglin <Brice.Goglin@inria.fr>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Hulk Robot <hulkci@huawei.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jason Gunthorpe <jgg@mellanox.com>
Cc: Jason Yan <yanaijie@huawei.com>
Cc: Jeff Moyer <jmoyer@redhat.com>
Cc: Jia He <justin.he@arm.com>
Cc: Joao Martins <joao.m.martins@oracle.com>
Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Cc: kernel test robot <lkp@intel.com>
Cc: Mike Rapoport <rppt@linux.ibm.com>
Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lkml.kernel.org/r/159643103173.4062302.768998885691711532.stgit@dwillia2-desk3.amr.corp.intel.com
Link: https://lkml.kernel.org/r/160106115761.30709.13539840236873663620.stgit@dwillia2-desk3.amr.corp.intel.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-10-13 18:38:28 -07:00
f5516ec5ef device-dax: make pgmap optional for instance creation
The passed-in dev_pagemap is only required in the pmem case, as the
libnvdimm core may have reserved a vmem_altmap for dev_memremap_pages()
to place the memmap in pmem directly.  In the hmem case there is no
agent reserving an altmap, so it can all be handled by a core internal
default.

Pass the resource range via a new @range property of 'struct
dev_dax_data'.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: David Hildenbrand <david@redhat.com>
Cc: Vishal Verma <vishal.l.verma@intel.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
Cc: Brice Goglin <Brice.Goglin@inria.fr>
Cc: Dave Jiang <dave.jiang@intel.com>
Cc: Ira Weiny <ira.weiny@intel.com>
Cc: Jia He <justin.he@arm.com>
Cc: Joao Martins <joao.m.martins@oracle.com>
Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Ben Skeggs <bskeggs@redhat.com>
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: David Airlie <airlied@linux.ie>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Hulk Robot <hulkci@huawei.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jason Gunthorpe <jgg@mellanox.com>
Cc: Jason Yan <yanaijie@huawei.com>
Cc: Jeff Moyer <jmoyer@redhat.com>
Cc: "Jérôme Glisse" <jglisse@redhat.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: kernel test robot <lkp@intel.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Mike Rapoport <rppt@linux.ibm.com>
Cc: Paul Mackerras <paulus@ozlabs.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Vivek Goyal <vgoyal@redhat.com>
Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lkml.kernel.org/r/159643099958.4062302.10379230791041872886.stgit@dwillia2-desk3.amr.corp.intel.com
Link: https://lkml.kernel.org/r/160106110513.30709.4303239334850606031.stgit@dwillia2-desk3.amr.corp.intel.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-10-13 18:38:28 -07:00
8b05418b25 seccomp updates for v5.10-rc1
- heavily refactor seccomp selftests (and clone3 selftests dependency) to
   fix powerpc (Kees Cook, Thadeu Lima de Souza Cascardo)
 - fix style issue in selftests (Zou Wei)
 - upgrade "unknown action" from KILL_THREAD to KILL_PROCESS (Rich Felker)
 - replace task_pt_regs(current) with current_pt_regs() (Denis Efremov)
 - fix corner-case race in USER_NOTIF (Jann Horn)
 - make CONFIG_SECCOMP no longer per-arch (YiFei Zhu)

Merge tag 'seccomp-v5.10-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux

Pull seccomp updates from Kees Cook:
 "The bulk of the changes are with the seccomp selftests to accommodate
  some powerpc-specific behavioral characteristics. Additional cleanups,
  fixes, and improvements are also included:

   - heavily refactor seccomp selftests (and clone3 selftests
     dependency) to fix powerpc (Kees Cook, Thadeu Lima de Souza
     Cascardo)

   - fix style issue in selftests (Zou Wei)

   - upgrade "unknown action" from KILL_THREAD to KILL_PROCESS (Rich
     Felker)

   - replace task_pt_regs(current) with current_pt_regs() (Denis
     Efremov)

   - fix corner-case race in USER_NOTIF (Jann Horn)

   - make CONFIG_SECCOMP no longer per-arch (YiFei Zhu)"

* tag 'seccomp-v5.10-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux: (23 commits)
  seccomp: Make duplicate listener detection non-racy
  seccomp: Move config option SECCOMP to arch/Kconfig
  selftests/clone3: Avoid OS-defined clone_args
  selftests/seccomp: powerpc: Set syscall return during ptrace syscall exit
  selftests/seccomp: Allow syscall nr and ret value to be set separately
  selftests/seccomp: Record syscall during ptrace entry
  selftests/seccomp: powerpc: Fix seccomp return value testing
  selftests/seccomp: Remove SYSCALL_NUM_RET_SHARE_REG in favor of SYSCALL_RET_SET
  selftests/seccomp: Avoid redundant register flushes
  selftests/seccomp: Convert REGSET calls into ARCH_GETREG/ARCH_SETREG
  selftests/seccomp: Convert HAVE_GETREG into ARCH_GETREG/ARCH_SETREG
  selftests/seccomp: Remove syscall setting #ifdefs
  selftests/seccomp: mips: Remove O32-specific macro
  selftests/seccomp: arm64: Define SYSCALL_NUM_SET macro
  selftests/seccomp: arm: Define SYSCALL_NUM_SET macro
  selftests/seccomp: mips: Define SYSCALL_NUM_SET macro
  selftests/seccomp: Provide generic syscall setting macro
  selftests/seccomp: Refactor arch register macros to avoid xtensa special case
  selftests/seccomp: Use __NR_mknodat instead of __NR_mknod
  selftests/seccomp: Use bitwise instead of arithmetic operator for flags
  ...
2020-10-13 16:33:43 -07:00
79bbbabd22 perf config: Export the perf_config_from_file() function
We'll use it to ask for extra config files to be loaded: profile-like
stuff that will be used first to make 'perf trace' mimic 'strace' output
via a 'perf strace' command that just sets up 'perf trace' output.

At some point it'll be used for regression tests, where we'll run some
simple commands like:

  perf strace ls > perf-strace.output
  strace ls > strace.output

And then use some diff-like tool, aware of mutable syscall args, to deal
with arguments for things like mmap that change at each execution: first
ignoring them, then properly tracking them when used across multiple
syscalls.

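A hedged sketch of how a 'perf strace' command could use the newly exported function, assuming the exported signature (callback, filename, data); the callback name and the profile path are hypothetical:

  /* Hedged usage sketch of the newly exported function. */
  #include "util/config.h"

  static int strace_profile_cb(const char *var, const char *value,
                               void *data)
  {
          /* consume profile keys here, e.g. trace.* style settings */
          return 0;
  }

  static int load_strace_profile(void)
  {
          /* hypothetical extra profile, loaded before the usual
             config files */
          return perf_config_from_file(strace_profile_cb,
                                       "/etc/perfconfig.strace", NULL);
  }
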
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Ian Rogers <irogers@google.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-10-13 17:03:19 -03:00
79373082fa perf python: Autodetect python3 binary
Some distros don't come with python2 and only have python3 available.
This causes the "'import perf' in python" self test to fail.

This change adds python3 to the list of possible python versions
that are autodetected but maintains the priorities for
'python2' and 'python' detection. Python3 has the lowest priority.

Committer notes:

On a fedora system without python2 packages the 'perf test python'
continues to work:

  # python2
  bash: python2: command not found...
  Similar command is: 'python'
  # rpm -qa | grep python2
  #

That "Similar command" gives the clue:

  # rpm -qf /usr/bin/python
  python-unversioned-command-3.8.5-5.fc32.noarch
  # rpm -ql python-unversioned-command
  /usr/bin/python
  /usr/share/man/man1/python.1.gz
  #

With it in place the 'python' binary is found and perf builds the python
binding using python3:

  # perf test -v python
  19: 'import perf' in python                                         :
  --- start ---
  test child forked, pid 379988
  python usage test: "echo "import sys ; sys.path.append('/tmp/build/perf/python'); import perf" | '/usr/bin/python' "
  test child finished with 0
  ---- end ----
  'import perf' in python: Ok
  #

Looking at that path:

  # ls -la /tmp/build/perf/python
  total 1864
  drwxrwxr-x.  2 acme acme      60 Oct 13 16:20 .
  drwxrwxr-x. 18 acme acme    4420 Oct 13 16:28 ..
  -rwxrwxr-x.  1 acme acme 1907216 Oct 13 16:28 perf.cpython-38-x86_64-linux-gnu.so
  #

And:

  # ldd ~/bin/perf | grep python
  	libpython3.8.so.1.0 => /lib64/libpython3.8.so.1.0 (0x00007f5471187000)
  #

As soon as we remove it:

  # rpm -e python-unversioned-command-3.8.5-5.fc32.noarch
  # hash -r
  # python
  bash: python: command not found...
  Install package 'python-unversioned-command' to provide command 'python'? [N/y] n
  #

And rebuilding perf now doesn't find python in the system:

  make: Entering directory '/home/acme/git/perf/tools/perf'
    BUILD:   Doing 'make -j24' parallel build
  <SNIP>
  Makefile.config:786: No python interpreter was found: disables Python support - please install python-devel/python-dev
  <SNIP>

After this patch:

  $ rpm -qi python-unversioned-command
  package python-unversioned-command is not installed
  $
  $ python
  bash: python: command not found...
  Install package 'python-unversioned-command' to provide command 'python'? [N/y] ^C
  $
  $ m
  make: Entering directory '/home/acme/git/perf/tools/perf'
    BUILD:   Doing 'make -j24' parallel build
  <SNIP>
    CC       /tmp/build/perf/tests/attr.o
    CC       /tmp/build/perf/tests/python-use.o
    DESCEND  plugins
    GEN      /tmp/build/perf/python/perf.so
    INSTALL  trace_plugins
    LD       /tmp/build/perf/tests/perf-in.o
    LD       /tmp/build/perf/perf-in.o
    LINK     /tmp/build/perf/perf
  <SNIP>
  make: Leaving directory '/home/acme/git/perf/tools/perf'
  19: 'import perf' in python                                         : Ok
  $ ldd ~/bin/perf | grep python
  	libpython3.8.so.1.0 => /lib64/libpython3.8.so.1.0 (0x00007f2c8c708000)
  $ ls -la /tmp/build/perf/python
  total 1864
  drwxrwxr-x.  2 acme acme      60 Oct 13 16:20 .
  drwxrwxr-x. 18 acme acme    4420 Oct 13 16:31 ..
  -rwxrwxr-x.  1 acme acme 1907216 Oct 13 16:31 perf.cpython-38-x86_64-linux-gnu.so
  $

Signed-off-by: James Clark <james.clark@arm.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
LPU-Reference: 20201005080645.6588-1-james.clark@arm.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2020-10-13 16:25:57 -03:00