iOS8 Core Image In Swift: Automatic image enhancement and using the built-in filters
iOS8 Core Image In Swift: More complex filters
iOS8 Core Image In Swift: Face detection and pixellation
Core Image not only ships with many built-in filters, it can also detect faces in an image. Note that Core Image only detects, it does not recognize: detection means finding regions of an image that match facial features (any face at all), while recognition means finding a specific face (say, some particular person's). Once Core Image finds a region matching facial features, it returns information about that feature, such as the face's bounds and the positions of the eyes and mouth.
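As a quick preview of the information a detection returns, here is a minimal sketch of reading a CIFaceFeature, using the same iOS 8 / Swift 1.x APIs as the rest of this post ("Face" is a placeholder asset name):

let image = CIImage(image: UIImage(named: "Face"))
let faceDetector = CIDetector(ofType: CIDetectorTypeFace, context: nil, options: nil)
for feature in faceDetector.featuresInImage(image) as [CIFaceFeature] {
    println(feature.bounds)              // the face's rectangle, in image coordinates
    if feature.hasLeftEyePosition {
        println(feature.leftEyePosition) // position of the left eye
    }
    if feature.hasMouthPosition {
        println(feature.mouthPosition)   // position of the mouth
    }
}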
class ViewController: UIViewController {
@IBOutlet var imageView: UIImageView!
lazy var originalImage: UIImage = {
return UIImage(named: "Image")
}()
lazy var context: CIContext = {
return CIContext(options: nil)
}()
......
Display originalImage in the viewDidLoad method:

override func viewDidLoad() {
super.viewDidLoad()
// Do any additional setup after loading the view, typically from a nib.
self.imageView.image = originalImage
}
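The context created above is what renders a CIImage back into something UIKit can display; the basic pattern, which this post uses again at the very end, looks like this:

let ciImage = CIImage(image: originalImage)
// Render the CIImage through the shared CIContext over its full extent...
let cgImage = context.createCGImage(ciImage, fromRect: ciImage.extent())
// ...then wrap the resulting CGImage for display.
imageView.image = UIImage(CGImage: cgImage)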
Now we're ready to implement the faceDetecting method. In the Core Image framework, the CIDetector class provides image-detection functionality; only a handful of APIs are needed to initialize a CIDetector and obtain detection results:

@IBAction func faceDetecting() {
let inputImage = CIImage(image: originalImage)
let detector = CIDetector(ofType: CIDetectorTypeFace,
context: context,
options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
var faceFeatures: [CIFaceFeature]!
if let orientation: AnyObject = inputImage.properties()?[kCGImagePropertyOrientation] {
faceFeatures = detector.featuresInImage(inputImage,
options: [CIDetectorImageOrientation: orientation]
) as [CIFaceFeature]
} else {
faceFeatures = detector.featuresInImage(inputImage) as [CIFaceFeature]
}
println(faceFeatures)
......
When using kCGImagePropertyOrientation, you may need to import the ImageIO framework.

originalImage and context are both obtained through lazy loading. When creating the CIDetector object, you must tell it what to detect; here that is naturally CIDetectorTypeFace. Besides faces, CIDetector can also detect QR codes (a short sketch of that follows the next listing). You then pass a context; multiple CIDetector instances can share a single context object. The third parameter is an options dictionary where we can specify the detection accuracy: besides CIDetectorAccuracyHigh there is also CIDetectorAccuracyLow. High accuracy detects more reliably but runs more slowly.

After creating the CIDetector, pass it the CIImage to be detected. Here I check whether the CIImage carries orientation metadata: if it does, I call featuresInImage:options:, because orientation is crucial to CIDetector and directly determines whether detection succeeds; some images carry no orientation metadata, in which case I call featuresInImage:. Since this still from The Big Bang Theory carries no orientation metadata, it's the featuresInImage method that runs here, but in most cases you'll end up using the former.

featuresInImage returns an array of CIFaceFeature objects. A CIFaceFeature contains the face's bounds and the positions of the left eye, right eye and mouth, so we can mark the face region using just bounds. It's easy to write code along these lines: get all the face features, instantiate a UIView from each feature's bounds, and display those views. Implemented, it looks like this:

@IBAction func faceDetecting() {
let inputImage = CIImage(image: originalImage)
let detector = CIDetector(ofType: CIDetectorTypeFace,
context: context,
options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
var faceFeatures: [CIFaceFeature]!
if let orientation: AnyObject = inputImage.properties()?[kCGImagePropertyOrientation] {
faceFeatures = detector.featuresInImage(inputImage, options: [CIDetectorImageOrientation: orientation]) as [CIFaceFeature]
} else {
faceFeatures = detector.featuresInImage(inputImage) as [CIFaceFeature]
}
println(faceFeatures)
for faceFeature in faceFeatures {
let faceView = UIView(frame: faceFeature.bounds)
faceView.layer.borderColor = UIColor.orangeColor().CGColor
faceView.layer.borderWidth = 2
imageView.addSubview(faceView)
}
}
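As an aside on the detector types mentioned above: pointing CIDetector at QR codes instead of faces is only a one-line change. A minimal sketch (CIDetectorTypeQRCode and CIQRCodeFeature are part of the iOS 8 SDK; qrImage stands in for any CIImage containing a QR code):

let qrDetector = CIDetector(ofType: CIDetectorTypeQRCode,
    context: context,
    options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
for feature in qrDetector.featuresInImage(qrImage) as [CIQRCodeFeature] {
    println(feature.messageString) // the decoded string payload of the QR code
}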
Is the face-marking code above enough? If you run it, you get a result like this: the frames are flipped and oversized, nowhere near the faces. That's because Core Image works in a coordinate system whose origin is at the bottom-left corner, while UIKit's origin is at the top-left, and the detected bounds are in the image's pixel coordinates rather than the view's. We need to convert the coordinates:

@IBAction func faceDetecting() {
let inputImage = CIImage(image: originalImage)
let detector = CIDetector(ofType: CIDetectorTypeFace,
context: context,
options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
var faceFeatures: [CIFaceFeature]!
if let orientation: AnyObject = inputImage.properties()?[kCGImagePropertyOrientation] {
faceFeatures = detector.featuresInImage(inputImage, options: [CIDetectorImageOrientation: orientation]) as [CIFaceFeature]
} else {
faceFeatures = detector.featuresInImage(inputImage) as [CIFaceFeature]
}
println(faceFeatures)
// 1.
let inputImageSize = inputImage.extent().size
var transform = CGAffineTransformIdentity
transform = CGAffineTransformScale(transform, 1, -1)
transform = CGAffineTransformTranslate(transform, 0, -inputImageSize.height)
for faceFeature in faceFeatures {
var faceViewBounds = CGRectApplyAffineTransform(faceFeature.bounds, transform)
// 2.
let scaleTransform = CGAffineTransformMakeScale(0.5, 0.5)
faceViewBounds = CGRectApplyAffineTransform(faceViewBounds, scaleTransform)
let faceView = UIView(frame: faceViewBounds)
faceView.layer.borderColor = UIColor.orangeColor().CGColor
faceView.layer.borderWidth = 2
imageView.addSubview(faceView)
}
}
Now it looks right. In step 1 we set up a transform that adjusts the coordinate system, and in step 2 we scaled the bounds (equivalent to multiplying x, y, width and height by 0.5). Since we know the actual scale is 0.5 (the image is 600 pixels wide and the imageView 300 points wide), we simply hard-coded 0.5. But running it shows a slight offset remains:

@IBAction func faceDetecting() {
let inputImage = CIImage(image: originalImage)
let detector = CIDetector(ofType: CIDetectorTypeFace,
context: context,
options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
var faceFeatures: [CIFaceFeature]!
if let orientation: AnyObject = inputImage.properties()?[kCGImagePropertyOrientation] {
faceFeatures = detector.featuresInImage(inputImage, options: [CIDetectorImageOrientation: orientation]) as [CIFaceFeature]
} else {
faceFeatures = detector.featuresInImage(inputImage) as [CIFaceFeature]
}
println(faceFeatures)
// 1.
let inputImageSize = inputImage.extent().size
var transform = CGAffineTransformIdentity
transform = CGAffineTransformScale(transform, 1, -1)
transform = CGAffineTransformTranslate(transform, 0, -inputImageSize.height)
for faceFeature in faceFeatures {
var faceViewBounds = CGRectApplyAffineTransform(faceFeature.bounds, transform)
// 2.
var scale = min(imageView.bounds.size.width / inputImageSize.width,
imageView.bounds.size.height / inputImageSize.height)
var offsetX = (imageView.bounds.size.width - inputImageSize.width * scale) / 2
var offsetY = (imageView.bounds.size.height - inputImageSize.height * scale) / 2
faceViewBounds = CGRectApplyAffineTransform(faceViewBounds, CGAffineTransformMakeScale(scale, scale))
faceViewBounds.origin.x += offsetX
faceViewBounds.origin.y += offsetY
let faceView = UIView(frame: faceViewBounds)
faceView.layer.borderColor = UIColor.orangeColor().CGColor
faceView.layer.borderWidth = 2
imageView.addSubview(faceView)
}
}

In step 2 of this final version, besides computing the scale from the width and height ratios, we also compute offsets along the x and y axes, so the code works correctly whether the scaling is constrained by the width or by the height (the division by 2 is because the scaled image is centered, leaving half of the leftover space on each side). Build and run: the frames are now correct at different imageView heights.
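The flip + scale + offset dance above is worth factoring out if you need it in more than one place. Here is a sketch of a hypothetical helper (faceFrameInView is my name, not part of the original code) that converts a detected rect into a frame for an aspect-fit image view:

// Convert a CIFaceFeature rect (bottom-left origin, image pixels)
// into a frame inside an aspect-fit UIImageView (top-left origin, points).
func faceFrameInView(faceBounds: CGRect, imageSize: CGSize, viewSize: CGSize) -> CGRect {
    // Flip the Y axis: Core Image's origin is bottom-left, UIKit's is top-left.
    var transform = CGAffineTransformMakeScale(1, -1)
    transform = CGAffineTransformTranslate(transform, 0, -imageSize.height)
    var rect = CGRectApplyAffineTransform(faceBounds, transform)
    // Scale to fit the view, then center the leftover space.
    let scale = min(viewSize.width / imageSize.width,
        viewSize.height / imageSize.height)
    rect = CGRectApplyAffineTransform(rect, CGAffineTransformMakeScale(scale, scale))
    rect.origin.x += (viewSize.width - imageSize.width * scale) / 2
    rect.origin.y += (viewSize.height - imageSize.height * scale) / 2
    return rect
}

Each face view then simply becomes UIView(frame: faceFrameInView(faceFeature.bounds, inputImageSize, imageView.bounds.size)).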
With detection working, let's get to the other half of this post's title and pixellate the detected faces:

@IBAction func pixellated() {
// 1.
var filter = CIFilter(name: "CIPixellate")
println(filter.attributes())
let inputImage = CIImage(image: originalImage)
filter.setValue(inputImage, forKey: kCIInputImageKey)
// filter.setValue(max(inputImage.extent().size.width, inputImage.extent().size.height) / 60, forKey: kCIInputScaleKey)
let fullPixellatedImage = filter.outputImage
// let cgImage = context.createCGImage(fullPixellatedImage, fromRect: fullPixellatedImage.extent())
// imageView.image = UIImage(CGImage: cgImage)
// 2.
let detector = CIDetector(ofType: CIDetectorTypeFace,
context: context,
options: nil)
let faceFeatures = detector.featuresInImage(inputImage) as [CIFaceFeature]
// 3.
var maskImage: CIImage!
for faceFeature in faceFeatures {
println(faceFeature.bounds)
// 4.
let centerX = faceFeature.bounds.origin.x + faceFeature.bounds.size.width / 2
let centerY = faceFeature.bounds.origin.y + faceFeature.bounds.size.height / 2
let radius = min(faceFeature.bounds.size.width, faceFeature.bounds.size.height)
let radialGradient = CIFilter(name: "CIRadialGradient",
withInputParameters: [
"inputRadius0" : radius,
"inputRadius1" : radius + 1,
"inputColor0" : CIColor(red: 0, green: 1, blue: 0, alpha: 1),
"inputColor1" : CIColor(red: 0, green: 0, blue: 0, alpha: 0),
kCIInputCenterKey : CIVector(x: centerX, y: centerY)
])
println(radialGradient.attributes())
// 5.
let radialGradientOutputImage = radialGradient.outputImage.imageByCroppingToRect(inputImage.extent())
if maskImage == nil {
maskImage = radialGradientOutputImage
} else {
println(radialGradientOutputImage)
maskImage = CIFilter(name: "CISourceOverCompositing",
withInputParameters: [
kCIInputImageKey : radialGradientOutputImage,
kCIInputBackgroundImageKey : maskImage
]).outputImage
}
}
// 6.
let blendFilter = CIFilter(name: "CIBlendWithMask")
blendFilter.setValue(fullPixellatedImage, forKey: kCIInputImageKey)
blendFilter.setValue(inputImage, forKey: kCIInputBackgroundImageKey)
blendFilter.setValue(maskImage, forKey: kCIInputMaskImageKey)
// 7.
let blendOutputImage = blendFilter.outputImage
let blendCGImage = context.createCGImage(blendOutputImage, fromRect: blendOutputImage.extent())
imageView.image = UIImage(CGImage: blendCGImage)
}
I've broken this down into 7 parts in detail:

1. Use the CIPixellate filter to give the original image a full-frame mosaic first.
2. Detect the faces and keep the results in faceFeatures.
3. Initialize the mask image and start iterating over all detected faces.
4. Because we want a separate mask for each face, based on that face's position, first compute the face's center point as x and y coordinates, then derive a radius from the face's width or height, and finally use these values to initialize a CIRadialGradient filter. (I set inputColor1's alpha to 0 so that everything outside the mask is transparent, since I don't care about any color beyond the mask; this is slightly different from Apple's example, which sets it to 1.)
5. Because the CIRadialGradient filter produces an image of infinite extent, crop it before use (Apple's example doesn't crop it), then composite each face's mask onto the previous ones with CISourceOverCompositing.
6. Use the CIBlendWithMask filter to blend the mosaic image, the original image and the mask image together.
7. Render the output and display it on screen.

Running it, the circular mosaic regions come out noticeably larger than the faces, so the radius needs correcting. Compute the display scale just as before:

var scale = min(imageView.bounds.size.width / inputImage.extent().size.width,
imageView.bounds.size.height / inputImage.extent().size.height)

Then fix the radius:

let radius = min(faceFeature.bounds.size.width, faceFeature.bounds.size.height) * scale
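In context, the relevant part of pixellated's face loop would now look roughly like this (a sketch; only the radius line changes, everything else is as in the full listing above):

let inputImageSize = inputImage.extent().size
// Scale of the aspect-fit display relative to the image's pixel size.
let scale = min(imageView.bounds.size.width / inputImageSize.width,
    imageView.bounds.size.height / inputImageSize.height)
for faceFeature in faceFeatures {
    let centerX = faceFeature.bounds.origin.x + faceFeature.bounds.size.width / 2
    let centerY = faceFeature.bounds.origin.y + faceFeature.bounds.size.height / 2
    // Shrink the gradient radius so the masked circle hugs the face.
    let radius = min(faceFeature.bounds.size.width, faceFeature.bounds.size.height) * scale
    // ... build the CIRadialGradient mask from centerX, centerY and radius as before
}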
After the fix, the mosaic and the face-detection frames both line up with the faces as expected.

References:
Core Image Programming Guide: https://developer.apple.com/library/mac/documentation/graphicsimaging/conceptual/CoreImaging/ci_intro/ci_intro.html