I took the same video (shot in 4K on my iPhone), imported it to my 2016 15-inch MacBook Pro (2.7 GHz / Radeon Pro 460), and rendered it on both devices using iMovie, saving to each device's local storage. First I rendered 4K to 4K: the Mac was faster by around 40%. Then I rendered 4K to 1080p, and the iPhone 7 Plus destroyed the Mac! The iPhone rendered to a .mov file, and the Mac to an .m4v file. I tried this with several different video files, all with the same result.

I used iMovie on my Mac because I thought it would be the fairest comparison. I later tried FCPX, and it was significantly faster on my Mac than iMovie, but the iPhone still beat it. For some reason iMovie only utilized around 30% of the processor while rendering (FCPX usually utilizes around 70%).

Does anyone know why? Is it because the original file was .mov, and the iPhone kept it as .mov while the Mac rendered it to .m4v? Is it because iMovie on iPhone renders in a way that is less resource-intensive due to the ARM processor? Or could it be the tight integration on the iPhone (Apple produces both the processor and the software, so they work extremely efficiently together), which Apple can't do as effectively on the Mac because of the Intel chips?
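For anyone who wants to reproduce the utilization numbers above (the ~30% I saw for iMovie vs. ~70% for FCPX), here's a rough sketch of how I'd sample a process's CPU usage from Terminal. This is just one way to do it; the PID here is the shell's own (`$$`) purely as a placeholder — in practice you'd substitute the renderer's PID, e.g. from `pgrep -x iMovie`:

```shell
# Sample a process's CPU usage a few times and print the average.
# Placeholder PID: the shell itself ($$); swap in the renderer's PID.
pid=$$
samples=3
total=0
for i in 1 2 3; do
  # `ps -o %cpu= -p PID` prints just the CPU% (the `=` suppresses the header)
  cpu=$(ps -o %cpu= -p "$pid")
  total=$(awk -v t="$total" -v c="$cpu" 'BEGIN { print t + c }')
  sleep 0.5
done
awk -v t="$total" -v n="$samples" 'BEGIN { printf "average CPU%%: %.1f\n", t / n }'
```

Note that on a multi-core machine `%cpu` can exceed 100% (one full core = 100%), so "30% of the processor" on a quad-core with Hyper-Threading would show up here as roughly 240%.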