I'd say it's better to call it a unit of counting.
If I have a bin of apples, and I say it's 5 apples wide, and 4 apples tall, then you'd say I have 20 apples, not 20 apples squared.
It's common to specify a length by a count of items passed along that length. E.g., a city block is a roughly square patch of ground bounded by roads, yet if you're traveling in a city, you might say "I walked 5 blocks." This is a linguistic shortcut that skips implied information. If you're talking about both length and area in an unclear context, additional words are required to convey the information; that's just how language works.
Exactly. Pixels are indivisible quanta, not units of any kind of distance. Saying pixel^2 makes as much sense as counting the number of atoms on the surface of a metal and calling it atoms^2.
Is it that, or is it a compound unit that has a defined width and height already? Something can be five football fields long by two football fields wide, for an area of ten football fields.
No, it is a count. Pixels can have different sizes and shapes, just like apples. Technically football fields vary slightly too but not close to as much as apples or pixels.
Pixel counts generally represent areas, by counting the pixels inside a region of the plane, but they can also represent lengths, by counting the pixels along a single row or column of the grid: a "length" in pixels is really the count over a one-pixel-thick rectangle.
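A little numpy sketch of that view (names and numbers are purely illustrative): the same counting operation yields an "area" when applied to a 2-D region and a "length" when applied to a one-pixel-thick strip.

  import numpy as np

  # hypothetical 1080p boolean mask marking pixels inside some region
  mask = np.zeros((1080, 1920), dtype=bool)
  mask[100:200, 300:800] = True        # a 100 x 500 rectangle of pixels

  area_px = int(mask.sum())            # count over the region: 50000
  strip = mask[150, :]                 # one row: a 1-pixel-thick rectangle
  length_px = int(strip.sum())         # count along the strip: 500

  print(area_px, length_px)            # both are plain counts, no unit algebra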
What's the standard size of a city block, the other countable example given by the original author?
Yes, city blocks are like pixels or apples. They do not have a standard size or shape.
Edit: To clarify, if someone says "3 blocks", that could vary by a factor of 3 or, in extreme cases, more, so when used as a unit of length it is a very rough estimate. In my country it is usually used as a way to know when you have reached your destination.
A pixel is two-dimensional, by definition. It is a unit of area. Even in the signal processing "sampling" definition of a pixel, it still has an areal density and is therefore still two-dimensional.
The problem with this article is that it incorrectly assumes a pixel to be a length and then makes nonsensical statements. The correct way to interpret "1920 pixels wide" is "the same width as 1920 pixels arranged in a 1920 by 1 row".
In the same way that "square feet" means "feet^2", where "square" acts as a squaring operator on "feet", in "pixels wide" the word "wide" acts as a square-root operator on the area and means "pixels^(1/2)" (which doesn't otherwise have a name).
> A Pixel Is Not A Little Square!
> This is an issue that strikes right at the root of correct image (sprite) computing and the ability to correctly integrate (converge) the discrete and the continuous. The little square model is simply incorrect. It harms. It gets in the way. If you find yourself thinking that a pixel is a little square, please read this paper.
> A pixel is a point sample. It exists only at a point. For a color picture, a pixel might actually contain three samples, one for each primary color contributing to the picture at the sampling point. We can still think of this as a point sample of a color. But we cannot think of a pixel as a square—or anything other than a point.
Alvy Ray Smith, 1995 http://alvyray.com/Memos/CG/Microsoft/6_pixel.pdf
A pixel is simply not a point sample. A camera does not take point sample snapshots, it integrates lightfall over little rectangular areas. A modern display does not reconstruct an image the way a DAC reconstructs sounds; it renders little rectangles of light, generally with visible XY edges.
The paper's claim applies at least somewhat sensibly to CRTs, but one mustn't imagine that the voltage interpolation and shadow masking a CRT does correspond meaningfully to how modern displays work... and even for CRTs it was never actually correct to claim that pixels were point samples.
It is pretty reasonable in the modern day to say that an idealized pixel is a little square. A lot of graphics operates under this simplifying assumption, and it works better than most things in practice.
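To illustrate, here is the little-square assumption at work in a generic area-averaging downsample (my own toy sketch, not from the paper): each output pixel is the mean of a 2x2 block of input squares, because it covers their combined area.

  import numpy as np

  def downsample_2x(img):
      # treat each pixel as a little square; an output pixel covers the
      # area of four input squares, so average them
      h, w = img.shape
      return img[:h//2*2, :w//2*2].reshape(h//2, 2, w//2, 2).mean(axis=(1, 3))

  img = np.arange(16, dtype=float).reshape(4, 4)
  print(downsample_2x(img))            # [[2.5, 4.5], [10.5, 12.5]]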
> A camera does not take point sample snapshots, it integrates lightfall over little rectangular areas.
Integrates this information into what? :)
> A modern display does not reconstruct an image the way a DAC reconstructs sounds
Sure, but some software may apply resampling over the original signal for the purposes of upscaling, for example. "Pixels as samples" makes more sense in that context (see the sketch after this comment).
> It is pretty reasonable in the modern day to say that an idealized pixel is a little square.
I do agree with this actually. A "pixel" in popular terminology is a rectangular subdivision of an image, leading us right back to TFA. The term "pixel art" makes sense with this definition.
Perhaps we need better names for these things. Is the "pixel" the name for the sample, or is it the name of the square-ish thing that you reconstruct from image data when you're ready to send to a display?
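On the resampling point above, a minimal 1-D sketch (toy numbers): treat the pixels as point samples at integer coordinates and reconstruct between them, here with plain linear interpolation.

  import numpy as np

  samples = np.array([0.0, 10.0, 20.0, 5.0])   # one scanline of point samples
  x_new = np.linspace(0, 3, 13)                # 4x denser sample positions
  upscaled = np.interp(x_new, np.arange(4), samples)
  print(upscaled)                              # reconstructed in-between values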
> Integrates this information into what? :)
Into electric charge? I don’t understand the question, and it sounds like the question is supposed to lead readers somewhere.
The camera integrates the incoming light over a tiny square into an electric charge and then reads out the charge (at least for a CCD), giving a brightness (and with the Bayer filter in front of the sensor, a color) for the pixel. So it’s a measurement over the tiny square, not a point sample.
> The camera integrates the incoming light over a tiny square [...] giving a brightness (and with the Bayer filter in front of the sensor, a color) for the pixel
This is where I was trying to go. The pixel, the result at the end of all that, is a single value (which may be a color with multiple components, sure). The physical reality of the sensor having an area and generating a charge is not relevant to the signal processing that happens after that. Smith is saying that this sample is best understood as a point, rather than a rectangle. That framing makes more sense for Smith, who was working on image processing in software, a step removed from displays and sensors.
It’s a single value, but it’s an integral over the square, not a point sample. If I shine a perfectly focused laser very close to the corner of one sensor pixel, I’ll still get a brightness value for the pixel. If it were a point sample, only the light at a single point would contribute to the output.
And depending on your application, you absolutely need to account for sensor properties like pixel pitch and the color filter array. They affect moiré pattern behavior and create some artifacts.
I’m not saying you can’t think of a pixel as a point sample, but correcting other people who say it’s a little square is just wrong.
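A quick numerical sketch of that laser example (a toy Gaussian spot, not real sensor physics): the area-integration model responds to light near the pixel's corner, while a literal point sample at the center sees essentially nothing.

  import numpy as np

  def laser(x, y, cx=0.05, cy=0.05, s=0.02):
      # narrow Gaussian spot near the corner of the pixel [0,1) x [0,1)
      return np.exp(-((x - cx)**2 + (y - cy)**2) / (2 * s * s))

  n = 200
  xs = (np.arange(n) + 0.5) / n
  X, Y = np.meshgrid(xs, xs)

  integrated = laser(X, Y).mean()      # integral over the square: clearly > 0
  point = laser(0.5, 0.5)              # point sample at the center: ~0
  print(integrated, point)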
A slightly tangential comment: integrating a continuous image on squares paving the image plane might be best viewed as applying a box filter to the continuous image, resulting in another continuous image, then sampling it point-wise at the center of each square.
It turns out that when you view things that way, treating pixels as points continues to make sense.
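A sketch of that equivalence (my own toy continuous image): the mean over a unit square is, by construction, the box-filtered image evaluated at the square's center, so "integrate over squares" and "filter, then point-sample at centers" produce the same pixel value.

  import numpy as np

  def scene(x, y):
      return np.sin(x) * np.cos(y)     # stand-in for the continuous image

  def pixel_value(i, j, n=64):
      # average the scene over the unit square [i,i+1) x [j,j+1); this is
      # the box-filtered image point-sampled at the center (i+.5, j+.5)
      xs = i + (np.arange(n) + 0.5) / n
      ys = j + (np.arange(n) + 0.5) / n
      X, Y = np.meshgrid(xs, ys)
      return scene(X, Y).mean()

  print(pixel_value(2, 3))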
The representation of pixels on the screen is not necessarily normative for the definition of the pixel. Indeed, since different display devices use different representations as you point out, it can't really be. You have to look at the source of the information. Is it a hit mask for a game? Then they are squares. Is it a heatmap of some analytical function? Then they are points. And so on.
DACs do a zero-order hold, which is equivalent to a pixel as a square.
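Concretely (a tiny toy demo): a zero-order hold repeats each sample until the next one arrives, which in 2-D is exactly nearest-neighbor upscaling, i.e. each pixel drawn as a solid square.

  import numpy as np

  samples = np.array([0.0, 10.0, 20.0, 5.0])
  zoh = np.repeat(samples, 4)          # hold each sample for 4 output ticks
  print(zoh)                           # a staircase: the 1-D "little square"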
This is one of my favorite articles. Although I think you can define for yourself what your pixels are, for most it is a point sample.
The article starts out with an assertion right in the title and does not do enough to justify it. The title is just wrong. Saying pixels are like metres is like saying metres are like apples.
When you multiply 3 meters by 4 meters, you do not get 12 meters. You get 12 meters squared. Because "meter" is not a discrete object; it's a measurement.
When you have points A, B, C and you create 3 new "copies" of those points (by geometric manipulation, like translating or rotating vectors to those points), you now have 12 points: A, B, C, A1, B1, C1, A2, B2, C2, A3, B3, C3. You don't get "12 points squared". (What would that even mean?) Because points are discrete objects.
When you have 3 apples in a row and you add 3 more such rows, you get 4 rows of 3 apples each. You now have 12 apples. You don't have "12 apples squared". Because apples are discrete objects.
When you have 3 pixels in a row and you add 3 more such rows of pixels, you get 4 rows of 3 pixels each. You now have 12 pixels. You don't get "12 pixels squared". Because pixels are discrete objects.
Pixels are like points and apples. Pixels are not like metres.
> When you multiply 3 meters by 4 meters, you do not get 12 meters. You get 12 meters squared.
"12 meter(s) squared" sounds like a square that is 12 meters on each side. On the other hand, "12 square meters" avoids this weirdness by sounding like 12 squares that are one meter on each side, which the area you're actually describing.
A pixel is a dot. The size and shape of the dot is implementation-dependent.
The dot may be physically small, or physically large, and it may even be non-square (I used to work for a camera company that had non-square pixels in one of its earlier DSLRs, and Bayer-format sensors can be thought of as “non-square”), so saying a pixel is a certain size, as a general measure across implementations, doesn’t really make sense.
In iOS and macOS, we use “display units,” which can be pixels or groups of pixels. The ratio usually changes from device to device.
This is my favourite single-pixel interface: https://en.wikipedia.org/wiki/Trafficators
They are sort of coming back, as side-mirror indicators.
So, the author answers the question:
> That means the pixel is a dimensionless unit that is just another name for 1, kind of like how the radian is length divided by length so it also equals one, and the steradian is area divided by area which also equals one.
But then for some reason decides to ignore it. I don’t understand this article. Yes, pixels are dimensionless units used for counting, not measuring. Their shape and internal structure is irrelevant (even subpixel rendering doesn’t actually deal with fractions - it alters neighbors to produce the effect).
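A toy sketch of that last point (my own example, not any particular renderer): "drawing" at a fractional coordinate never produces a fractional pixel; it distributes intensity among the four integer neighbors.

  import numpy as np

  def splat(img, x, y, value=1.0):
      # bilinear weights: share 'value' among the 4 surrounding pixels
      x0, y0 = int(np.floor(x)), int(np.floor(y))
      fx, fy = x - x0, y - y0
      img[y0,     x0    ] += value * (1 - fx) * (1 - fy)
      img[y0,     x0 + 1] += value * fx * (1 - fy)
      img[y0 + 1, x0    ] += value * (1 - fx) * fy
      img[y0 + 1, x0 + 1] += value * fx * fy

  img = np.zeros((4, 4))
  splat(img, 1.25, 2.5)                # no fractions stored, only altered neighbors
  print(img)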
Happens to all square shapes.
A chessboard is 8 tiles wide and 8 tiles long, so it consists of 64 tiles covering an area of, well, 64 tiles.
Not all pixels are square, though! Does anyone remember anamorphic DVDs? https://en.wikipedia.org/wiki/Anamorphic_widescreen
City blocks, too.
In the US...
Do people in Spanish cities with strong grids (eg Barcelona) not also use the local language equivalent of "blocks" as a term? I would be surprised if not. It's a fundamentally convenient term in any area that has a repeated grid.
The fact that some cities don't have repeated grids and hence don't use the term is not really a valuable corrective to the post you are replying to.
While it is certainly more common in the US we occasionally use blocks as a measurement here in Sweden too. Blocks are just smaller and less regular here.
Pixel, used as a unit of horizontal or vertical resolution, typically implies the resolution of the other axis as well, at least up to common aspect ratios. We used to say 640x480 or 1280x1024 – now we might say 1080p or 2.5K but what we mean is 1920x1080 and 2560x1440, so "pixel" does appear to be a measure of area. Except of course it's not – it's a unit of a dimensionless quantity that measures the amount of something, like the mole. Still, a "quadratic count" is in some sense a quantity distinct from "linear count", just like angles and solid angles are distinct even though both are dimensionless quantities.
The issue is muddied by the fact that what people mostly care about is either the linear pixel count or pixel pitch, the distance between two neighboring pixels (or perhaps rather its reciprocal, pixels per unit length). Further confounding is that technically, resolution is a measure of angular separation, and to convert pixel pitch to resolution you need to know the viewing distance.
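For concreteness, the pitch-to-angle conversion works out like this (hypothetical monitor numbers, my own arithmetic):

  import math

  def pixels_per_degree(pitch_mm, distance_mm):
      # visual angle subtended by one pixel at the given viewing distance
      angle_deg = math.degrees(2 * math.atan(pitch_mm / (2 * distance_mm)))
      return 1 / angle_deg

  # e.g. a 27" 2560x1440 panel (~0.233 mm pitch) viewed from 60 cm
  print(pixels_per_degree(0.233, 600))   # ~45 pixels per degree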
Digital camera manufacturers at some point started using megapixels (around the point that sensor resolutions rose above 1 MP), presumably because big numbers are better marketing. Then there's the fact that camera screen and electronic viewfinder resolutions are given in subpixels, presumably again for marketing reasons.
Digital photography then takes us on to subpixels, Bayer filters (https://en.wikipedia.org/wiki/Color_filter_array) and so on. You can also separate out the luminance and colour parts. Most image and video compression puts more emphasis on the luminance profile, representing the colour more approximately. The subpixels on a digital camera (or on a display, for that matter) take advantage of this quirk of human vision.
A pixel is neither a unit of length nor a unit of area; it is, like a byte, a unit of information.
Sometimes it is used as a length or an area, omitting a conversion constant, but we do that all the time; the article gives mass vs force as an example.
Also worth mentioning that pixels are not always square. For example, the once-popular 320x200 resolution has pixels taller than they are wide.
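The arithmetic, for the curious (standard VGA mode 13h numbers): 320x200 was stretched onto a 4:3 display, which forces a non-square pixel aspect ratio.

  storage_aspect = 320 / 200           # 1.6, the shape of the pixel grid
  display_aspect = 4 / 3               # the physical screen shape
  pixel_aspect = display_aspect / storage_aspect
  print(pixel_aspect)                  # ~0.833: each pixel is taller than wide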
This depends upon who you ask. CSS defines a pixel as an angle:
https://www.w3.org/TR/css-values-3/#reference-pixel
That's the definition of a "reference pixel", not a pixel. They actually refer to a pixel (and the angle) in the definition.
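For reference, the visual angle behind the reference pixel works out like this (one 1/96-inch pixel at a 28-inch arm's length, per the CSS Values spec):

  import math

  angle_deg = math.degrees(math.atan((1 / 96) / 28))
  print(angle_deg)                     # ~0.0213 degrees, about 1.28 arcminutes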
The author is very confused.
A Pixel is a telephone.
Reminds me of this Numberphile w/ Cliff Stoll [1]: The Nescafé Equation (43 coffee beans)
[1] https://youtu.be/3V84Bi-mzQM
This is a fun post by Nayuki - I'd never given this much thought, but this takes the premise and runs with it
Wait till they hear about fluid ounces.
I’m surprised the author didn’t dig into the fact that not all pixels are square. Or that pixels are made of underlying RGB light emitters. And that those RGB emitters are often very non-square. And often not 1:1 RGBEmitter-to-Pixel (stupid pentile).
> "Je n’ai fait celle-ci plus longue que parce que je n’ai pas eu le loisir de la faire plus courte."
or
> "I have made this longer than usual because I have not had time to make it shorter."
A pixel is a sample, or a collection of values of the Red, Green, and Blue components of light captured at a particular location in a typically rectangular area. Pixels have no physical dimensions. A camera sensor has no pixels; it has photosites (four colour-sensitive elements per rectangular area).
And what’s the difference between a photosite and a pixel? Sounds like a difference made up to correct other people.
A photosite is a set of four photosensitive electronic sensors that register levels of the RGB components of light: https://www.cambridgeincolour.com/tutorials/camera-sensors.h... The camera sensor turns the data captured by a single photosite into a single data structure (a pixel): a tuple of as many discrete values as there are components in a given colour space (three for RGB).
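A toy sketch of that photosite-to-pixel step (a naive RGGB average, far cruder than real demosaicing; the layout is just an assumption):

  import numpy as np

  raw = np.random.rand(4, 4)           # one value per photosensitive element (RGGB)

  r = raw[0::2, 0::2]
  g = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2
  b = raw[1::2, 1::2]
  pixels = np.stack([r, g, b], axis=-1)   # one RGB tuple per 2x2 photosite block
  print(pixels.shape)                     # (2, 2, 3)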
If you want to be pedantic, you shouldn’t say that the photosite has 4 sensors, depending on the color filter array you can have other numbers like 9 or 36, too.
And the difference is pure pedantry, because each photosite corresponds to a pixel in the image (unless we're talking about lens correction?). It's like making up a new word for monitor pixels because those are little lights (for OLED) while the pixel is just a tuple of numbers. I don't see why calling the sensor grid items "pixels" could be misunderstood in any way.
You are right about the differences in the number of sensors; there may be more. I prefer to talk about photosites because additional properties like photosite size or sensor photosite density help me make better decisions when I'm selecting cameras/sensors for a photo project. For example, a 24MP M43 sensor is not the same as a 24MP APS-C or FF sensor, even though the image files they produce have the same number of pixels. Similarly, a 36MP FF sensor is essentially the same as a 16MP APS-C sensor: it produces image files that contain more pixels from a wider field of view, but the resolution of the sensor stays the same, because both sensors have about the same photosite density (if you pair the same lens with both sensors).
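Back-of-the-envelope numbers for that density comparison (nominal sensor dimensions, my own arithmetic):

  def density(mp, width_mm, height_mm):
      return mp * 1e6 / (width_mm * height_mm)   # photosites per mm^2

  print(density(24, 36.0, 24.0))       # 24MP full frame: ~27800 per mm^2
  print(density(24, 23.6, 15.6))       # 24MP APS-C:      ~65200 per mm^2
  print(density(36, 36.0, 24.0))       # 36MP full frame: ~41700 per mm^2
  print(density(16, 23.6, 15.6))       # 16MP APS-C:      ~43500 per mm^2 (close)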
Is a pixel not a pixel when it's in a different color space? (HSV, XYZ etc?)
RGB is the most common colour space, but yes, other colour spaces are available.