Passing Sound (wav) file to javascript from objective c

Keywords: javascript, objective-c, sound, wav, file      Updated: 2023-09-26

I am recording a sound file (wav format) in Objective-C. I want to pass it back to JavaScript using Objective-C's stringByEvaluatingJavaScriptFromString. I think I will have to convert the wav file to a base64 string to pass it to this function, and then convert the base64 string back into (wav/blob) format in JavaScript and hand it to an audio tag to play it. I don't know how to do that, and I am also not sure whether this is the best way to pass the wav file back to JavaScript. Any ideas would be appreciated.
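For reference, the base64 route the question asks about can be sketched on the JavaScript side roughly as below, assuming browser APIs (atob, Blob, URL); the function name base64ToWavBlob is my own:

```javascript
// Hypothetical sketch: decode a base64 string (received from Objective-C)
// into a Blob that an <audio> tag can play.
function base64ToWavBlob(base64) {
  const binary = atob(base64);                 // base64 -> binary string
  const bytes = new Uint8Array(binary.length); // copy bytes into a typed array
  for (let i = 0; i < binary.length; i++) {
    bytes[i] = binary.charCodeAt(i);
  }
  return new Blob([bytes], { type: "audio/wav" });
}
// In the page: audioElement.src = URL.createObjectURL(base64ToWavBlob(b64String));
```

As the answer below explains, this is not the route I ended up taking, but it is the most direct translation of the idea in the question.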

Well, this was not as straightforward as I expected. Here is how I got it done.

Step 1: I recorded the audio in caf format using AVAudioRecorder.

NSArray *dirPaths;
NSString *docsDir;
dirPaths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
docsDir = [dirPaths objectAtIndex:0];
soundFilePath = [docsDir stringByAppendingPathComponent:@"sound.caf"];
NSURL *soundFileURL = [NSURL fileURLWithPath:soundFilePath];
NSDictionary *recordSettings = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithInt:AVAudioQualityMin], AVEncoderAudioQualityKey,
    [NSNumber numberWithInt:16], AVEncoderBitRateKey,
    [NSNumber numberWithInt:2], AVNumberOfChannelsKey,
    [NSNumber numberWithFloat:44100], AVSampleRateKey,
    nil];
NSError *error = nil;
audioRecorder = [[AVAudioRecorder alloc]
                 initWithURL:soundFileURL
                 settings:recordSettings error:&error];
if(error)
{
    NSLog(@"error: %@", [error localizedDescription]);
} else {
    [audioRecorder prepareToRecord];
}

After that, you just need to call [audioRecorder record] to record the audio. It will be recorded in caf format. If you want to see my recordAudio function, here it is.

- (void)recordAudio
{
    if (!audioRecorder.recording)
    {
        _playButton.enabled = NO;
        _recordButton.title = @"Stop";
        [audioRecorder record];
        [self animate1:nil finished:nil context:nil];
    }
    else
    {
        [_recordingImage stopAnimating];
        [audioRecorder stop];
        _playButton.enabled = YES;
        _recordButton.title = @"Record";
    }
}

Step 2: Convert the caf format to wav format. I was able to do this using the function below.

- (BOOL)exportAssetAsWaveFormat:(NSString *)filePath
{
NSError *error = nil;
NSDictionary *audioSetting = [NSDictionary dictionaryWithObjectsAndKeys:
                              [ NSNumber numberWithFloat:44100.0], AVSampleRateKey,
                              [ NSNumber numberWithInt:2], AVNumberOfChannelsKey,
                              [ NSNumber numberWithInt:16], AVLinearPCMBitDepthKey,
                              [ NSNumber numberWithInt:kAudioFormatLinearPCM], AVFormatIDKey,
                              [ NSNumber numberWithBool:NO], AVLinearPCMIsFloatKey,
                              [ NSNumber numberWithBool:NO], AVLinearPCMIsBigEndianKey,
                              [ NSNumber numberWithBool:NO], AVLinearPCMIsNonInterleaved,
                              [ NSData data], AVChannelLayoutKey, nil ];
NSString *audioFilePath = filePath;
AVURLAsset * URLAsset = [[AVURLAsset alloc]  initWithURL:[NSURL fileURLWithPath:audioFilePath] options:nil];
if (!URLAsset) return NO ;
AVAssetReader *assetReader = [AVAssetReader assetReaderWithAsset:URLAsset error:&error];
if (error) return NO;
NSArray *tracks = [URLAsset tracksWithMediaType:AVMediaTypeAudio];
if (![tracks count]) return NO;
AVAssetReaderAudioMixOutput *audioMixOutput = [AVAssetReaderAudioMixOutput
                                               assetReaderAudioMixOutputWithAudioTracks:tracks
                                               audioSettings:audioSetting];
if (![assetReader canAddOutput:audioMixOutput]) return NO;
[assetReader addOutput:audioMixOutput];
if (![assetReader startReading]) return NO;

NSString *title = @"WavConverted";
NSArray *docDirs = NSSearchPathForDirectoriesInDomains (NSDocumentDirectory, NSUserDomainMask, YES);
NSString *docDir = [docDirs objectAtIndex: 0];
NSString *outPath = [[docDir stringByAppendingPathComponent :title]
                     stringByAppendingPathExtension:@"wav" ];
// Remove any previous output file; returning NO here would make the export
// fail on the first run, when no file exists yet.
if ([[NSFileManager defaultManager] fileExistsAtPath:outPath])
{
    [[NSFileManager defaultManager] removeItemAtPath:outPath error:NULL];
}
soundFilePath = outPath;
NSURL *outURL = [NSURL fileURLWithPath:outPath];
AVAssetWriter *assetWriter = [AVAssetWriter assetWriterWithURL:outURL
                                                      fileType:AVFileTypeWAVE
                                                         error:&error];
if (error) return NO;
AVAssetWriterInput *assetWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio
                                                                          outputSettings:audioSetting];
assetWriterInput.expectsMediaDataInRealTime = NO;
if (![assetWriter canAddInput:assetWriterInput]) return NO;
[assetWriter addInput:assetWriterInput];
if (![assetWriter startWriting]) return NO;

//[assetReader retain];
//[assetWriter retain];
[assetWriter startSessionAtSourceTime:kCMTimeZero];
dispatch_queue_t queue = dispatch_queue_create( "assetWriterQueue", NULL );
[assetWriterInput requestMediaDataWhenReadyOnQueue:queue usingBlock:^{
    NSLog(@"start");
    while (1)
    {
        if ([assetWriterInput isReadyForMoreMediaData] && (assetReader.status == AVAssetReaderStatusReading)) {
            CMSampleBufferRef sampleBuffer = [audioMixOutput copyNextSampleBuffer];
            if (sampleBuffer) {
                [assetWriterInput appendSampleBuffer:sampleBuffer];
                CFRelease(sampleBuffer);
            } else {
                [assetWriterInput markAsFinished];
                break;
            }
        }
    }
    [assetWriter finishWriting];
    //[self playWavFile];
    NSError *err;
    NSData *audioData = [NSData dataWithContentsOfFile:soundFilePath options: 0 error:&err];
    [self.audioDelegate doneRecording:audioData];
    //[assetReader release ];
    //[assetWriter release ];
    NSLog(@"soundFilePath=%@",soundFilePath);
    NSDictionary *dict = [[NSFileManager defaultManager] attributesOfItemAtPath:soundFilePath error:&err];
    NSLog(@"size of wav file = %@",[dict objectForKey:NSFileSize]);
    //NSLog(@"finish");
}];
return YES;
}

In this function, I call the audioDelegate function doneRecording with audioData in wav format. Below is the code for doneRecording.

-(void) doneRecording:(NSData *)contents
{
    myContents = [[NSData dataWithData:contents] retain];
    [self returnResult:alertCallbackId args:@"Recording Done.",nil];
}
// Call this function when you have results to send back to javascript callbacks
 // callbackId : int comes from handleCall function
// args: list of objects to send to the javascript callback
- (void)returnResult:(int)callbackId args:(id)arg, ...;
{
  if (callbackId==0) return;
  va_list argsList;
  NSMutableArray *resultArray = [[NSMutableArray alloc] init];
  if(arg != nil){
    [resultArray addObject:arg];
    va_start(argsList, arg);
    while((arg = va_arg(argsList, id)) != nil)
      [resultArray addObject:arg];
    va_end(argsList);
  }
  NSString *resultArrayString = [json stringWithObject:resultArray allowScalar:YES error:nil];
  [self performSelectorOnMainThread:@selector(stringByEvaluatingJavaScriptFromString:) withObject:[NSString stringWithFormat:@"NativeBridge.resultForCallback(%d,%@);",callbackId,resultArrayString] waitUntilDone:NO];
  [resultArray release];
}

Step 3: Now it is time to tell the JavaScript inside the UIWebView that we are done recording the audio, so it can start accepting our data in chunks. I am using websockets to transfer the data back to JavaScript. The data is transferred in chunks because the server I am using (https://github.com/benlodotcom/BLWebSocketsServer) is built on top of libwebsockets (http://git.warmcat.com/cgi-bin/cgit/libwebsockets/).

This is how the server is started in the delegate class.

- (id)initWithFrame:(CGRect)frame 
{
  if (self = [super initWithFrame:frame]) {
      [self _createServer];
      [self.server start];
      myContents = [NSData data];
    // Set delegate in order to "shouldStartLoadWithRequest" to be called
    self.delegate = self;
    // Set non-opaque in order to make "body{background-color:transparent}" working!
    self.opaque = NO;
    // Instanciate JSON parser library
    json = [ SBJSON new ];
    // load our html file
    NSString *path = [[NSBundle mainBundle] pathForResource:@"webview-document" ofType:@"html"];
    [self loadRequest:[NSURLRequest requestWithURL:[NSURL fileURLWithPath:path]]];

  }
  return self;
}
-(void) _createServer
{
    /*Create a simple echo server*/
    self.server = [[BLWebSocketsServer alloc] initWithPort:9000 andProtocolName:echoProtocol];
    [self.server setHandleRequestBlock:^NSData *(NSData *data) {
        NSString *convertedString = [[NSString alloc] initWithData:data encoding:NSUTF8StringEncoding];
        NSLog(@"Received Request...%@",convertedString);
        if([convertedString isEqualToString:@"start"])
        {
            NSLog(@"myContents size: %d",[myContents length]);
            int contentSize = [myContents length];
            int chunkSize = 64*1023;
            chunksCount = ([myContents length]/chunkSize)+1;
            NSLog(@"ChunkSize=%d",chunkSize);
            NSLog(@"chunksCount=%d",chunksCount);
            chunksArray =  [[NSMutableArray array] retain];
            int index = 0;
            //NSRange chunkRange;
            for(int i=1;i<=chunksCount;i++)
            {
                if(i==chunksCount)
                {
                    NSRange chunkRange = {index,contentSize-index};
                    NSLog(@"chunk# = %d, chunkRange=(%d,%d)",i,index,contentSize-index);
                    NSData *dataChunk = [myContents subdataWithRange:chunkRange];
                    [chunksArray addObject:dataChunk];
                    break;
                }
                else
                {
                    NSRange chunkRange = {index, chunkSize};
                    NSLog(@"chunk# = %d, chunkRange=(%d,%d)",i,index,chunkSize);
                    NSData *dataChunk = [myContents subdataWithRange:chunkRange];
                    index += chunkSize;
                    [chunksArray addObject:dataChunk];
                }
            }
            return [chunksArray objectAtIndex:0];
        }
        else
        {
            int chunkNumber = [convertedString intValue];
            if(chunkNumber>0 && (chunkNumber+1)<=chunksCount)
            {
                return [chunksArray objectAtIndex:(chunkNumber)];
            }

        }
        NSLog(@"Releasing Array");
        [chunksArray release];
        chunksCount = 0;
        return [NSData dataWithBase64EncodedString:@"Stop"];
    }];
}
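For illustration, the chunking arithmetic the server performs above (64*1023-byte chunks, with a shorter final chunk for the remainder) can be sketched in JavaScript; the function name chunkRanges is my own:

```javascript
// Hypothetical mirror of the server's chunking loop: split a
// contentSize-byte payload into (index, length) ranges of at most
// chunkSize bytes each.
function chunkRanges(contentSize, chunkSize = 64 * 1023) {
  const ranges = [];
  let index = 0;
  while (index < contentSize) {
    const length = Math.min(chunkSize, contentSize - index);
    ranges.push({ index, length });
    index += length;
  }
  return ranges;
}
```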
On the JavaScript side, the code is:

var socket;
var chunkCount = 0;
var soundBlob, soundUrl;
var smallBlobs = new Array();
function captureMovieCallback(response)
{
    if(socket)
    {
        try{
            socket.send('start');
        }
        catch(e)
        {
            log('Socket is not valid object');
        }
    }
    else
    {
        log('socket is null');
    }
}
function closeSocket(response)
{
    socket.close();
}

function connect(){
    try{
        window.WebSocket = window.WebSocket || window.MozWebSocket;
        socket = new WebSocket('ws://127.0.0.1:9000',
                                      'echo-protocol');
        socket.onopen = function(){
        }
        socket.onmessage = function(e){
            var data = e.data;
            if(e.data instanceof ArrayBuffer)
            {
                log('its arrayBuffer');
            }
            else if(e.data instanceof Blob)
            {
                if(soundBlob)
                   log('its Blob of size = '+ e.data.size + ' final blob size:'+ soundBlob.size);
                if(e.data.size != 3)
                {
                    //log('its Blob of size = '+ e.data.size);
                    smallBlobs[chunkCount]= e.data;
                    chunkCount = chunkCount +1;
                    socket.send(''+chunkCount);
                }
                else
                {
                    //alert('End Received');
                    try{
                    soundBlob = new Blob(smallBlobs,{ "type" : "audio/wav" });
                    var myURL = window.URL || window.webkitURL;
                    soundUrl = myURL.createObjectURL(soundBlob);
                    log('soundURL='+soundUrl);
                    }
                    catch(e)
                    {
                        log('Problem creating blob and url.');
                    }
                    try{
                        var serverUrl = 'http://10.44.45.74:8080/MyTestProject/WebRecording?record';
                        var xhr = new XMLHttpRequest();
                        xhr.open('POST',serverUrl,true);
                        xhr.setRequestHeader("content-type","multipart/form-data");
                        xhr.send(soundBlob);
                    }
                    catch(e)
                    {
                        log('error uploading blob file');
                    }
                    socket.close();
                }
                //alert(JSON.stringify(msg, null, 4));
            }
            else
            {
                log('dont know');
            }
        }
        socket.onclose = function(){
            //message('<p class="event">Socket Status: '+socket.readyState+' (Closed)');
            log('final blob size:'+soundBlob.size);
        }
    } catch(exception){
       log('<p>Error: '+exception);
    }
}
function log(msg) {
    NativeBridge.log(msg);
}
function stopCapture() {
    NativeBridge.call("stopMovie", null,null);
}
function startCapture() {
    NativeBridge.call("captureMovie",null,captureMovieCallback);
}

NativeBridge.js

var NativeBridge = {
  callbacksCount : 1,
  callbacks : {},
  // Automatically called by native layer when a result is available
  resultForCallback : function resultForCallback(callbackId, resultArray) {
    try {

    var callback = NativeBridge.callbacks[callbackId];
    if (!callback) return;
    console.log("calling callback for "+callbackId);
    callback.apply(null,resultArray);
    } catch(e) {alert(e)}
  },
  // Use this in javascript to request native objective-c code
  // functionName : string (I think the name is explicit :p)
  // args : array of arguments
  // callback : function with n-arguments that is going to be called when the native code returned
  call : function call(functionName, args, callback) {
    //alert("call");
    //alert('callback='+callback);
    var hasCallback = callback && typeof callback == "function";
    var callbackId = hasCallback ? NativeBridge.callbacksCount++ : 0;
    if (hasCallback)
      NativeBridge.callbacks[callbackId] = callback;
    var iframe = document.createElement("IFRAME");
    iframe.setAttribute("src", "js-frame:" + functionName + ":" + callbackId+ ":" + encodeURIComponent(JSON.stringify(args)));
    document.documentElement.appendChild(iframe);
    iframe.parentNode.removeChild(iframe);
    iframe = null;
  },
    log : function log(message) {
        var iframe = document.createElement("IFRAME");
        iframe.setAttribute("src", "ios-log:"+encodeURIComponent(JSON.stringify("#iOS#" + message)));
        document.documentElement.appendChild(iframe);
        iframe.parentNode.removeChild(iframe);
        iframe = null;
    }
};
  1. On the JavaScript side, connect() is called on body load of the HTML page.

  2. Once we receive the callback (captureMovieCallback) from the startCapture function, we send a 'start' message over the websocket.

  3. On the Objective-C side, the server splits the wav audio data into small chunks of chunkSize = 64*1023 bytes.

  4. The first chunk is sent back to the JavaScript side.

  5. JavaScript accepts this chunk and sends back the number of the next chunk it needs from the server.

  6. The server sends the chunk indicated by that number. This process keeps repeating until all the chunks have been sent.

  7. Finally, we send a stop message back to the JavaScript side indicating that we are done. It is deliberately 3 bytes in size (which is used as the criterion to break out of the loop).

  8. Each chunk is stored as a small blob in an array. Now we make a bigger blob out of these using the line below:

    soundBlob = new Blob(smallBlobs, {"type": "audio/wav"});

    This blob is uploaded to the server, which writes it out as a wav file. We can then pass the URL of that wav file as the src of an audio tag to replay it on the JavaScript side.

  9. We close the websocket connection after sending the blob to the server.
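The reassembly step above can be factored into a small helper; the name assembleWavBlob is my own, and the playback usage assumes browser Blob/URL APIs and a hypothetical audio element:

```javascript
// Combine the received chunk blobs (or typed arrays) into a single wav blob.
function assembleWavBlob(smallBlobs) {
  return new Blob(smallBlobs, { type: "audio/wav" });
}
// Usage in the page, with an assumed <audio id="player"> element:
//   const soundUrl = URL.createObjectURL(assembleWavBlob(smallBlobs));
//   document.getElementById("player").src = soundUrl;
```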

If all you want to do is play the sound, then you are better off using one of the native audio playback systems in iOS rather than the HTML audio tag.