
GCC Improvement for 68080

Gerardo G.

Posts 54
15 Aug 2019 08:13


I was also thinking that GCC is too complex to start with. Let's not forget the Tiny C Compiler (TCC) either EXTERNAL LINK. Fabrice Bellard has always built great things, including TinyGL and FFmpeg...


Steve Ferrell

Posts 424
15 Aug 2019 11:16


The HCC version of the Sozobon C compiler could be a candidate.  EXTERNAL LINK
Neozeed had it generating assembler output for the a68k assembler found here:  EXTERNAL LINK
He even used it on Windows to cross-compile for the Amiga, using BLINK as the linker.  There's a good blog post about it here:  EXTERNAL LINK
The source to the compiler is here:  EXTERNAL LINK
 


Gunnar von Boehn
(Apollo Team Member)
Posts 6197
15 Aug 2019 13:36


Improving GCC is generally a good idea.



Gildo Addox

Posts 31
15 Aug 2019 15:19


One question is which compiler.

The other question is who is improving it.

We had (have) Bebbo working on GCC. Is there any news on whether he's still working on it?


Steve Ferrell

Posts 424
15 Aug 2019 17:38


Gunnar von Boehn wrote:

Improving GCC is generally a good idea.
 

Unfortunately, people who are experts in C/C++, compiler design, and 68K assembly language are more scarce than chickens with lips.  Hopefully one or two still exist who have the time and motivation to assist with updating GCC or one of the other compilers. 


Samuel Devulder

Posts 248
15 Aug 2019 19:56


Gildo Addox wrote:

We had (have) Bebbo working on GCC. Are there any news if he's still working on it?

From the GitHub commits, one can see he is still working on it, yes.

Now, concerning the various compilers listed (DCC, HCC, Aztec-C, TCC, etc.): please consider that none of these were designed to generate code specifically tuned for a "recent" CPU architecture. None of them has a decent optimizer, let alone a scheduler, and all of them lack support for adding a new CPU. Moreover, none of them is compliant with the latest C or C++ standards. There is nothing more we can get out of them than what they currently produce, which is 040-level code quality at best.

A modern compiler architecture like GCC 6+ (or LLVM) is definitely the right move to produce good-quality assembly from C or C++. The older compilers produce code that works fine, but not the assembly quality needed to be on a par with what was done 20 years ago on x86. The poor assembly quality has a direct impact on ported code. Many SDL-based games suffer from this: they are plain slow on the 68080 and need huge amounts of work to be really playable on the Vampire. Having a GCC that really uses the whole 68080 instruction set cleverly is a must. Without it, porting games that were made 15-20 years ago for the PC is pointless, because they'll be too slow to be enjoyable.


Kamelito Loveless

Posts 259
15 Aug 2019 20:58


DICE has also been compiled on modern systems for cross-compiling.
EXTERNAL LINK


Kamelito Loveless

Posts 259
15 Aug 2019 20:59


Is it in active development?


Gerardo G.

Posts 54
16 Aug 2019 09:19


Samuel Devulder wrote:

  [...] A modern compiler architecture like GCC 6+ (or LLVM) is definitely the right move to produce good-quality assembly from C or C++. Having a GCC that really uses the whole 68080 instruction set cleverly is a must. [...]

I think we are talking about GCC because someone already started improving GCC. I also think that Clang + LLVM would be a better long-term option, in case the team wants to invest a huge effort, because it at least has much cleaner code than GCC.

If there is no mid-size team of several people to perform this task, it is much better to talk again about the other compilers listed here: VBCC, ICC, TCC... Despite being old, many of them are C99 compatible, which is enough for now, and they have much simpler and more compact source code, easy to modify by one person or a small team.

Yes, GCC and Clang (+ LLVM) are theoretically better and support "recent" CPU architectures, but, again, the effort needed to adapt and maintain these compilers would be huge, and a simpler compiler with optimizations done in assembler could give the same or better results. For example, a few years ago we used Angular.js for frontend web development. Is it great? Yes it is, but sometimes you do not have full control and you start to need a bigger team to maintain the code. We changed to Vue.js, which was much better and simpler, and on the latest projects to Mithril.js, which is much, much smaller and simpler, yet keeps everything under the control of a small team. Is it the most advanced option? No. Is it the right one for a small dev team? Definitely, yes.

I would never say that GCC is better, just as I would never say that x86 or ARM are better. Better for what kind of scenario?

I think it is important to ask what the target is here: bring to the Amiga the same applications from Windows or Linux and run them dozens of times slower, or create lightweight applications optimized for the Amiga, compatible with the 68K, that make use of the extra power the 68080 offers? Are there teams of hundreds of people working on Amiga apps, or small teams of 1 to fewer than 10 people?

As ever, it is only my opinion :) I hope nobody feels bad because of this :P


Mr Niding

Posts 459
16 Aug 2019 09:42


Gerardo González-Trejo wrote:

  [...]

Very good post, and from a non-developer point of view it makes sense. It feels to me that Gunnar wants clean and efficient code as the baseline for the Amiga in general, and Apollo in particular, since a lot of performance is wasted on abstraction layers and inefficient code.
C/C++ coders like Steve can probably discuss this topic with you directly, though :)

I guess the trade-off is that people (and developers) see recompiling programs from other platforms as a way to offset the lack of up-to-date software, be it games or productivity.

From a non-developer point of view:
I use computers for productivity; LibreOffice Writer/Calc and Vegas (video editing). That constitutes the majority of my usage (in addition to email via Hotmail, Gmail, ProtonMail and Tutanota).

Video editing is probably close to a non-starter against AMD Ryzen performance, but a recompiled application able to read LibreOffice file formats would be a massive addition to Apollo.
Same with a reasonably updated PDF reader.

Using native/older compilers, although updated: would they keep this recompiling process relatively easy, or do these compilers basically mean rewriting from the ground up?


Olivier Landemarre

Posts 147
16 Aug 2019 12:16


Gerardo González-Trejo wrote:

  [...]

Hello

I understand your point of view very well; it makes sense. I too prefer compiling with a fast, small compiler like PureC in the Atari world, but for a long time I have been using GCC rather than PureC to port applications and libraries, since most of them will not compile with anything else, and porting that code by hand would be more than a huge amount of work. So working on GCC makes sense to me too, even if it is more complex. I don't know about Clang + LLVM; perhaps that compiler is able to do the same as GCC, but I don't know if there is a port or whether it would be easy.

Olivier



Samuel Devulder

Posts 248
16 Aug 2019 13:45


Notice that you won't get a first-class compiler by making small delta-improvements to old ones. The optimizations needed for recent CPUs (68080 included) require new ways of compiling C/C++ code compared with what was used in the past. Think about the way an LLVM-based compiler works: it is completely different from the way gcc 2.95 works, for instance. The same goes for GCC; large parts of it are redesigned from scratch at each major release. There are very good reasons for that.

To sum up: you won't invent the LED light bulb by improving filament-based lights. You need a totally different approach. The same goes for compilers. I think it is pointless to try to improve the old ones: they already give the best they can, and there is practically no room left for enhancement. Better to start like Bebbo did, with a fairly recent and portable compiler containing the state of the art of compilation, and adapt it to the new instruction-set architecture.


Grom 68k

Posts 61
21 Aug 2019 21:27


Grom 68k wrote:

      Hi,
         
          I found the function exact_real_truncate in gcc/real.c. It will be cross-compilation safe.
         
const char *
output_move_const_double (rtx *operands)
{
  int code = standard_68881_constant_p (operands[1]);

  if (code != 0)
    {
      static char buf[40];

      sprintf (buf, "fmovecr #0x%x,%%0", code & 0xff);
      return buf;
    }

  REAL_VALUE_TYPE r;
  r = *CONST_DOUBLE_REAL_VALUE (operands[1]);
  if (exact_real_truncate (SFmode, &r))
    return "fmove%.s %f1,%0";

  return "fmove%.d %1,%0";
}
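
A standalone sketch (not GCC code) of what exact_real_truncate decides here: whether a double constant survives a round-trip through single precision unchanged, so that the shorter fmove.s form can be used.

#include <stdio.h>

/* Minimal standalone illustration (not the GCC implementation):
   does a double value survive a round-trip through single precision
   unchanged?  If yes, the fmove.s path above is safe.              */
static int fits_in_single (double d)
{
  return (double) (float) d == d;
}

int main (void)
{
  printf ("16.5 -> %d\n", fits_in_single (16.5)); /* 1: exact, fmove.s path */
  printf ("0.1  -> %d\n", fits_in_single (0.1));  /* 0: inexact, fmove.d    */
  return 0;
}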

          I can't find where the fmove string is used. I would like to understand why %1 is sometimes replaced by %f1. I can also check whether the conversion of operands[1] to single precision is implicit.
          Likewise, I can't find where fadd is generated.

          Regards

        EDIT: It should be %1 in output_move_const_single instead of %f1. That's why I have a doubt.
       
        ;;- Operand classes for the register allocator:
        ;;- 'a' one of the address registers can be used.
        ;;- 'd' one of the data registers can be used.
        ;;- 'f' one of the m68881/fpu registers can be used
        ;;- 'r' either a data or an address register can be used.
     

     
      Hi,
     
      Bebbo has made it better :)  EXTERNAL LINK

      How does the pipeline work in the second case? Is a cycle lost, like this?

              fdadd.s #0x41840000,fp1
              ; wait cycle
              fdadd.x fp1,fp0

     
      Can we write  fdadd.s #16.5,fp1 ?
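
      Side note: 0x41840000 in the snippet above is exactly the IEEE-754 single-precision bit pattern of 16.5, so #16.5 and #0x41840000 denote the same immediate. A quick standalone check:

      #include <stdio.h>
      #include <string.h>

      /* Quick check (illustrative only): 0x41840000 is the IEEE-754
         single-precision image of 16.5, i.e. the same immediate as in
         the fdadd.s line above.                                       */
      int main (void)
      {
        float f = 16.5f;
        unsigned int bits;
        memcpy (&bits, &f, sizeof bits);
        printf ("0x%08X\n", bits);   /* prints 0x41840000 */
        return 0;
      }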

      return 15-d; can be improved, as for integers. EXTERNAL LINK

      Regards


Samuel Devulder

Posts 248
22 Aug 2019 01:14


Grom 68k wrote:

        How does the pipeline work in the second case? Is a cycle lost, like this?

                fdadd.s #0x41840000,fp1
                ; wait cycle
                fdadd.x fp1,fp0

 

  Not just *a* cycle, but actually 5 cycles before the result of the first addition is available in fp1.
 

        Can we write  fdadd.s #16.5,fp1 ?
 

  I do this with vasm.
 

        return 15-d; can be improved, as for integers. EXTERNAL LINK

  Try (d-15): the code is good. Then, if you do -(d-15), I'd expect a simple fneg to be added, but gcc gives the same result as with 15-d. It doesn't like fneg for some reason, yet fneg is really a cheap instruction (1 cycle; only 1 bit of the register is flipped).
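
  As an illustration (the float type and the expected code in the comments are assumptions based on this discussion, not verified compiler output), the three variants compare like this:

  float sub_from_const (float d) { return 15.0f - d; }    /* the weak case reported above          */
  float sub_const      (float d) { return d - 15.0f; }    /* good code: a single fsub              */
  float negated        (float d) { return -(d - 15.0f); } /* ideally sub_const plus a 1-cycle fneg */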


Grom 68k

Posts 61
22 Aug 2019 07:26


I would like to write an example like this:

    fdadd.w #16,fp1
    fdadd.x fp2,fp3
    ; after conversion, the two instructions probably run in the same stage of the pipeline

How should the pipeline description be written for instructions with a conversion?
(Example: "fconv_pipeline, f0_pipeline, f1_pipeline, f2_pipeline, f3_pipeline, f4_pipeline, f5_pipeline")



    Can we write  fdadd.s #16.5,fp1 ?

  I do this with vasm.

The GitHub commit already distinguishes the VASM format. This could be cool, but it's only cosmetic.


#ifndef TARGET_AMIGAOS_VASM
  sprintf (buf, "%sd #0x%lx%08lx,%%0", cmd, l2[0] & 0xFFFFFFFF, l2[1] & 0xFFFFFFFF);
#else
  sprintf (buf, "%sd #$%lx%08lx,%%0", cmd, l2[0] & 0xFFFFFFFF, l2[1] & 0xFFFFFFFF);
#endif
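
A standalone sketch of the only difference between the two branches (the "0x" versus "$" hex prefix). The cmd value "fadd." and the constant 1.0, whose IEEE-754 double image is 0x3FF00000 / 0x00000000, are made-up example values, not taken from the real GCC code path:

#include <stdio.h>

/* Sketch only: cmd = "fadd." and the 1.0 bit pattern are hypothetical
   example values; only the hex-prefix difference is being shown.     */
int main (void)
{
  char buf[64];
  const char *cmd = "fadd.";
  unsigned long l2[2] = { 0x3FF00000UL, 0x00000000UL };

  sprintf (buf, "%sd #0x%lx%08lx,%%0", cmd, l2[0] & 0xFFFFFFFF, l2[1] & 0xFFFFFFFF);
  puts (buf);   /* fadd.d #0x3ff0000000000000,%0  (GAS-style hex)  */

  sprintf (buf, "%sd #$%lx%08lx,%%0", cmd, l2[0] & 0xFFFFFFFF, l2[1] & 0xFFFFFFFF);
  puts (buf);   /* fadd.d #$3ff0000000000000,%0   (VASM-style hex) */
  return 0;
}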



    return 15-d; can be improved, as for integers. EXTERNAL LINK

  Try (d-15): the code is good. Then, if you do -(d-15), I'd expect a simple fneg to be added, but gcc gives the same result as with 15-d. It doesn't like fneg for some reason, yet fneg is really a cheap instruction (1 cycle; only 1 bit of the register is flipped).


It's not a very interesting optimisation, but it can save an FP register.



Grom 68k

Posts 61
22 Aug 2019 07:51


Bebbo updated the cost for mult when lsl is used.
         
    case MULT:
      {
        rtx op = XEXP (x, 0);
        if (CONST_INT_P (op) && exact_log2 (INTVAL (op)))
          *total = 4;
        else
          *total = 12;
        return true;
      }
      break;

         
I tried the same for div, but it's a little more complex and less useful. EXTERNAL LINK

EDIT: I made some tests with mul.  EXTERNAL LINK

Is lea better than lsl (as with -m68020)? I tried with -fbbb=+V and it is not the case.

Mul7 can be optimised by inverting d0 and d1.

Should mul be prioritized over 3 instructions (4 for the buggy mul7) for -m68080? mul.w is smaller (4 bytes) than the 3 instructions (6 bytes); see the sketch below.

-m68040 does not activate -fomit-frame-pointer by default, as all the others do.
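
A small illustration of the trade-off; the commented sequences are the usual strength-reduction alternatives, not verified output of this GCC port:

int by8 (int x) { return x * 8; }  /* one lsl.l #3 (power of two)                             */
int by7 (int x) { return x * 7; }  /* a 3-instruction move/lsl/sub sequence vs. a single mul  */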
     


Grom 68k

Posts 61
22 Aug 2019 13:16


There is a bug in the commit: n-fpx -> -(fpx-n).
Compilation fails with return 15-d;  EXTERNAL LINK


Grom 68k

Posts 61
22 Aug 2019 13:27


Commit "use human readable numbers for float constants"
 
  If REAL_VALUE_TO_TARGET_DOUBLE is also used in the ASM output, the original value could be better.  EXTERNAL LINK


Grom 68k

Posts 61
22 Aug 2019 13:41


Commit "use shorter FP constants if possible"

I think it is better to use FP_REG_P instead of REG_P, but I haven't managed to produce a bug with the current code.

  if (REG_P (operands[2]))
    return "f%&mul%.x %2,%0";
  if (GET_CODE (operands[2]) == CONST_DOUBLE)
    return print_fp_const("f%&mul%.", "<FP:prec>", operands[2]);
  return "f%&mul%.d %f2,%0";



Grom 68k

Posts 61
22 Aug 2019 14:23


Commit "raise costs for shift with constants > 8"
 
  EXTERNAL LINK   
 
  The while should be replaced by an if (capping *total at 8).

  *total = 4;
  rtx op = XEXP (x, 1);
  if (CONST_INT_P (op))
    {
      int n = INTVAL (op);
      while ((n -= 8) > 0)
        *total += 4;
    }
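
A sketch of the change I mean (my proposal, not Bebbo's code): charge the extra cost once, so *total never exceeds 8 for constant shift counts above 8.

  *total = 4;
  rtx op = XEXP (x, 1);
  if (CONST_INT_P (op) && INTVAL (op) > 8)
    *total += 4;   /* cap: *total is at most 8 */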

EDIT:

Multiplying by (1<<31) is weird (sign bit).

I think there is a problem with unsigned mul: mulu is never used.
