Re: [Patches] [PATCH] ARM: NEON detected memcpy.
- To: Will Newton <will.newton@xxxxxxxxxx>
- Subject: Re: [Patches] [PATCH] ARM: NEON detected memcpy.
- From: Ondřej Bílka <neleai@xxxxxxxxx>
- Date: Wed, 3 Apr 2013 11:18:55 +0200
On Wed, Apr 03, 2013 at 09:15:46AM +0100, Will Newton wrote:
> On 3 April 2013 08:58, Shih-Yuan Lee (FourDollars) <sylee@xxxxxxxxxxxxx> wrote:
> > Hi,
> >
> > I am working on the NEON detected memcpy.
> > This is based on what Siarhei Siamashka did at 2009 [1].
> >
> > The idea is to use HWCAP and check NEON bit.
> > If there is a NEON bit, using NEON optimized memcpy.
> > If not, using the original memcpy instead.
> >
> > If using NEON optimized memcpy, the performance of memcpy will be
> > raised up by about 50% [2].
> >
> > How do you think about this idea? Any comment is welcome.
>
> Hi,
>
> I am working on a similar project within Linaro, which is to add the
> NEON/VFP capable memcpy from cortex-strings[1] to glibc. However I am
> looking at enabling it at runtime via indirect functions which makes
> it slightly more complex than just importing the cortex strings code,
> so I don't have any patches to show you just yet.
>
> [1] https://launchpad.net/cortex-strings
Hi,
You need to optimize the header, because you typically copy fewer than 128 bytes.
My measurements of how many 16-byte blocks are used are here:
http://kam.mff.cuni.cz/~ondra/benchmark_string/profile/result.html
If I had code to read the cycle count from a perf counter, I could provide a
tool to measure memcpy performance in an arbitrary binary.
On x64 I used overlapping loads and stores to minimize branches. Try how the
attached memcpy performs on small inputs.
#include <stdint.h>
#include <stdlib.h>

/* Align VALUE down to a multiple of ALIGN bytes.  */
#define ALIGN_DOWN(value, align) \
  ALIGN_DOWN_M1 (value, (align) - 1)

/* Align VALUE down by ALIGN_M1 + 1 bytes.
   Useful if you have precomputed ALIGN - 1.  */
#define ALIGN_DOWN_M1(value, align_m1) \
  ((void *) ((uintptr_t) (value) & ~(uintptr_t) (align_m1)))

/* Align VALUE up to a multiple of ALIGN bytes.  */
#define ALIGN_UP(value, align) \
  ALIGN_UP_M1 (value, (align) - 1)

/* Align VALUE up by ALIGN_M1 + 1 bytes.
   Useful if you have precomputed ALIGN - 1.  */
#define ALIGN_UP_M1(value, align_m1) \
  ((void *) (((uintptr_t) (value) + (uintptr_t) (align_m1)) \
             & ~(uintptr_t) (align_m1)))

/* Copy 16 bytes as two 64-bit words.  These do unaligned word accesses,
   which is fine on x86-64 but not portable C.  LOAD/LOADU only document
   intent (aligned vs. unaligned side); the dereference happens in STORE.  */
#define STOREU(x, y) STORE (x, y)
#define STORE(x, y) \
  do { \
    ((uint64_t *) (x))[0] = ((uint64_t *) (y))[0]; \
    ((uint64_t *) (x))[1] = ((uint64_t *) (y))[1]; \
  } while (0)
#define LOAD(x) x
#define LOADU(x) x

static char *memcpy_small (char *dest, char *src, size_t no, char *ret);

void *
memcpy_new_u (char *dest, char *src, size_t n)
{
  char *from, *to, *ret = dest;

  if (n < 16)
    return memcpy_small (dest, src, n, dest);

  /* Copy the first and last 16 bytes unconditionally; the loop below
     may overlap them, which is harmless.  */
  STOREU (dest, LOADU (src));
  STOREU (dest + n - 16, LOADU (src + n - 16));

  /* Everything at or past TO is already covered by the tail copy.  */
  to = dest + n - 16;

  /* Advance SRC to the first 16-byte boundary after it and shift DEST
     by the same amount, so the loop does aligned loads and unaligned
     stores.  */
  from = ALIGN_DOWN (src + 16, 16);
  dest += from - src;
  src = from;
  from = dest;

  while (from < to)
    {
      STOREU (from, LOAD (src));
      from += 16;
      src += 16;
    }
  return ret;
}

/* Copy NO < 16 bytes using at most two possibly overlapping moves,
   branching on the size bits instead of looping.  */
static char *
memcpy_small (char *dest, char *src, size_t no, char *ret)
{
  if (no & 8)
    {
      ((uint64_t *) dest)[0] = ((uint64_t *) src)[0];
      ((uint64_t *) (dest + no - 8))[0] = ((uint64_t *) (src + no - 8))[0];
      return ret;
    }
  if (no & 4)
    {
      ((uint32_t *) dest)[0] = ((uint32_t *) src)[0];
      ((uint32_t *) (dest + no - 4))[0] = ((uint32_t *) (src + no - 4))[0];
      return ret;
    }
  if (no == 0)
    return ret;    /* A zero-length copy must not touch memory.  */
  dest[0] = src[0];
  if (no & 2)
    ((uint16_t *) (dest + no - 2))[0] = ((uint16_t *) (src + no - 2))[0];
  return ret;
}
_______________________________________________
Patches mailing list
Patches@xxxxxxxxxx
http://eglibc.org/cgi-bin/mailman/listinfo/patches