Conversation
benjaminsavage left a comment
I'm very confused. Let's do a call.
    // M' neurons wide and here M is M'/N, L layers tall
    pub async fn neural_network<C, S, const M: usize, const N: usize, const MTimesN: usize>(
        ctx: C,
        last_layer_neurons: &[BitDecomposed<AdditiveShare<Boolean, N>>; M],
These are the activations of the last layer of neurons? If so, let's give it a name including that word.
    pub async fn neural_network<C, S, const M: usize, const N: usize, const MTimesN: usize>(
        ctx: C,
        last_layer_neurons: &[BitDecomposed<AdditiveShare<Boolean, N>>; M],
        edge_weights: &[BitDecomposed<AdditiveShare<Boolean, N>>; M],
It's very hard to know how to use this data structure.
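For illustration, a plain-Rust analogue of what this parameter seems to hold. Everything below is a sketch: the aliases are stand-ins, not the real ipa types, and reading BitDecomposed as "one entry per bit position" is my assumption.

```rust
// Stand-in aliases to make the shape of the parameter concrete.
// These are NOT the real ipa types, just plain analogues.
const M: usize = 2; // neuron groups per layer
const N: usize = 8; // SIMD lanes (neurons per group)
const BITS: usize = 16; // bit width of each value

/// One "bit" position across all N lanes (analogue of AdditiveShare<Boolean, N>).
type LaneBits = [bool; N];

/// A bit-decomposed, N-lane value: BITS entries, one per bit position
/// (analogue of BitDecomposed<AdditiveShare<Boolean, N>>).
type BitDecomposedLanes = [LaneBits; BITS];

/// The parameter in question: for each of the M groups, one bit-decomposed
/// N-lane vector, so M * N scalar values in total — presumably what the
/// MTimesN const parameter tracks.
type EdgeWeights = [BitDecomposedLanes; M];

fn main() {
    let w: EdgeWeights = [[[false; N]; BITS]; M];
    println!("groups={} lanes={} bits={}", w.len(), N, BITS);
}
```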
        Boolean: FieldSimd<N>,
        AdditiveShare<Boolean, N>: BooleanProtocols<C, N>,
        Boolean: FieldSimd<M>,
        AdditiveShare<Boolean, M>: BooleanProtocols<C, M>,
Why do we need both N and M vectorization support?
    {
        // use super::step::MultiplicationStep as Step;
        // for each layer we get M*M vector of edge_weights
        let mut mults = ctx.parallel_join(zip(edge_weights.iter(), last_layer_neurons).enumerate().map(|(i, (edge_weight, neuron))| {
`mults` is not a good name. Maybe `input_edge_activations`?
        let mut num = 0;
        while mults.len() > 1 {
            // Add each of the mults amongst themselves
            for (a, b) in mults.iter().tuples() {
                let (add_result, _) = integer_add::<_, S, N>(
                    ctx.narrow(&TwoHundredFiftySixBitOpStep::Bit(M+num)),
                    RecordId::from(num),
                    &a,
                    &b,
                )
                .await?;
                mults.push(add_result);
                num += 1;
            }
        }
Andy already has code that does this (log(n) depth steps, adding each time and thereby dividing the length of the list by 2). Use `pub async fn aggregate_values<'ctx, 'fut, C, OV, const B: usize>(`
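For reference, the shape of that reduction on plain integers — just a sketch of the pattern, not the actual aggregate_values implementation:

```rust
// Minimal sketch of a log(n)-depth pairwise reduction, on plain u64s
// rather than secret shares. Each round sums adjacent pairs, halving
// the list, so n values are reduced in ceil(log2(n)) rounds.
fn tree_sum(mut vals: Vec<u64>) -> Option<u64> {
    while vals.len() > 1 {
        let next: Vec<u64> = vals
            .chunks(2)
            .map(|pair| pair.iter().sum()) // an odd tail element passes through as-is
            .collect();
        vals = next;
    }
    vals.pop()
}

fn main() {
    assert_eq!(tree_sum(vec![1, 2, 3, 4, 5]), Some(15));
    assert_eq!(tree_sum(vec![]), None);
}
```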
        let mut one_cell = mults[0];
        while one_cell.len() > 1 {
            let (left, right) = one_cell.split_at((one_cell.len()/2).try_into().unwrap());
            (one_cell, _) = integer_add::<_, S, N>(
                ctx.narrow(&TwoHundredFiftySixBitOpStep::Bit(M+num)),
                RecordId::FIRST,
                &left,
                &right,
            )
            .await?;
            num += 1;
        }
I'm lost. I don't understand what is happening here.
        .upgraded_semi_honest((edge_weights, prev_neurons), |ctx, (edge_weights, prev_neurons)| async move {
            let edge_weights1 = BitDecomposed::transposed_from(&edge_weights).unwrap();
            let prev_neurons1 = BitDecomposed::transposed_from(&prev_neurons).unwrap();
            let edge_weights = [edge_weights1.clone(), edge_weights1.clone(), edge_weights1.clone(), edge_weights1.clone(), edge_weights1.clone(), edge_weights1.clone(), edge_weights1.clone(), edge_weights1];
What is happening here?
        // for i in 0..M-1 // For going through all layers
        // for j in 0..N-1 // Current layer
        // for k in 0..N-1 // For previous layer
        // neuron(i*N + j) += neuron((i-1)*N + k) * edge_weight(neuron((i)*N + j), neuron((i-1)*N + k))

        // M' neurons wide and here M is M'/N, L layers tall
Are these comments in sync with the code?
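For comparison, a cleartext sketch of the forward pass these comments seem to describe. Names and sizes are illustrative, and note the quoted pseudocode starts i at 0 yet reads neuron((i-1)*N + k), which would underflow on the first layer — one hint the comments and code may have drifted apart:

```rust
// Cleartext sketch of the forward pass the quoted comments seem to
// describe. All names and sizes are illustrative, not from the PR.
const N_NEURONS: usize = 4; // neurons per layer

// weights[l][j][k] connects neuron k of layer l to neuron j of layer
// l + 1, so the loop starts from the input activations and never needs
// the (i-1) index that the quoted pseudocode would underflow at i = 0.
fn forward_pass(
    input: [i64; N_NEURONS],
    weights: &[[[i64; N_NEURONS]; N_NEURONS]],
) -> [i64; N_NEURONS] {
    let mut prev = input;
    for layer in weights {
        let mut next = [0i64; N_NEURONS];
        for j in 0..N_NEURONS {
            for k in 0..N_NEURONS {
                // accumulate previous-layer activation times edge weight
                next[j] += prev[k] * layer[j][k];
            }
        }
        prev = next;
    }
    prev
}

fn main() {
    let input = [1, 2, 3, 4];
    // identity weight matrix for a single layer: output == input
    let mut identity = [[0i64; N_NEURONS]; N_NEURONS];
    for j in 0..N_NEURONS {
        identity[j][j] = 1;
    }
    assert_eq!(forward_pass(input, &[identity]), input);
}
```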