DynamoRIO / dynamorio · Merge request !4390

i#2390: Replace add+ldar with ldr+dmb in AArch64 HT lookup

Open: Merck Hung requested to merge github/fork/merckhung/aarch64_perf into master, Aug 02, 2020 (8 commits, 7 changed files)

Replacing the add+ldar pair with an ldr+dmb pair in the HT lookup reduces the overhead (instrumented vs. native) ratios of SPECInt 2006 and 2017 by 3% to 28%, although a 7% regression was found in 657.xz_s.

The add+ldar pair was used to prevent memory-access instructions from being reordered, ensuring the hash mask is always loaded before the hash table. The same ordering restriction is imposed in the corresponding update_lookuptable_tls() routine.

With the ldr+dmb replacement, the add instruction is eliminated from the critical path (ldar supports only base-register addressing, so an add was needed to form the address, whereas ldr accepts an immediate offset directly); this elimination contributes most of the overhead reduction.

In addition, the frequently taken branch of the inner-loop epilogue is folded into a tighter, four-instruction inner loop, at the cost of one extra sub instruction on each of the two exit paths.

Since the exit paths are not as hot as the HT lookup itself, the impact of the added sub instruction on each is negligible (not visible in measurements).

To condense the inner loop down to four instructions, the pre-indexed ldr in the prologue is replaced with a post-indexed version.
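As a purely illustrative sketch (not the MR's actual emitted code), a four-instruction probe loop built around a post-indexed load might look like this, with the extra sub on each exit path:

```asm
loop:
        ldr   x1, [x0], #16     // post-indexed: load tag, then advance bucket ptr
        cmp   x1, x2            // compare loaded tag against the target
        b.eq  hit               // match: take the hit-exit path
        cbnz  x1, loop          // non-empty entry: keep probing
miss:
        sub   x0, x0, #16       // extra sub undoes the post-increment on exit
        ...
hit:
        sub   x0, x0, #16       // same extra sub at the hit-exit entry
        ...
```

The post-indexed addressing mode fuses the load and the pointer advance into one instruction, which is what lets the loop body shrink to four instructions.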

On the hit-exit path, register usage is carefully swapped, which eliminates a move instruction. The instruction count on that path remains unchanged, since a sub instruction is added at its entry.

A new opnd_set_zero_offset_post_index() function is created to set the pre_index field of a memop operand to false (post-indexed).

An INSTR_CREATE_ldr_imm macro is created to take Rd, memop, and imm operands (the DR convention) instead of Rd, Rt, Imm (the assembly-mnemonic convention). Calling opnd_set_zero_offset_post_index() on the memop allows selecting a post-indexed encoding when needed (pre-indexed by default).

Developers who want a post-indexed LDR should call opnd_set_zero_offset_post_index() on the just-generated memop to set its pre_index field from true to false (indicating post-indexed); see the example in core/arch/aarch64/emit_utils.c.

core/ir/aarchxx/instr_create_aarchxx.h is created to share instruction-creation macros between the AArch64 and ARM ports. INSTR_CREATE_dmb and its enums are moved into the new file.

Issue: #2390. Tests: SPECInt 2006 and 2017, run on an ARM Juno r2 (8 GB RAM, Debian).
